To be objective is to aspire to knowledge that bears no trace of the knower -- knowledge unmarked by prejudice or skill, fantasy or judgment, wishing or striving. Objectivity is blind sight, seeing without inference, interpretation, or intelligence. -- Daston & Galison, (2010) Objectivity, p. 17
... only those (experiments) that produce quantitative results are really susceptible to scientific evaluation. -- D. J. Hand, (2014) The Improbability Principle, p. 29
Behavior. Harry punches Sam in the head. Sam falls to the ground. Can you imagine a human, much less a robot or a Martian, able to “objectively” judge, on the basis of what it has just seen, whether a criminal act -- much less an immoral act -- has been committed?
Even while carefully trying to avoid “subjectivity” in their characterizations, few humans would venture such a judgment as “objective” without believing, or wanting, to “know more” about the situation. A trained computer scientist would not bet his or her retirement savings on a specific characterization of “what the computer is now doing” from mere photographs of the machine and its outputs -- even if cheating a little on “objectivity” with a record of recently entered data.
“Looking Like” ≠ “Being.” As the ancient dictum expresses, “Appearance is not Reality.” But this is not merely an ancient prescription. As Daston and Galison describe in their important book, Objectivity, the concept of objectivity in the quote above has been, and continues to be, common to many scientific undertakings, particularly from the mid-19th to the mid-20th Century.
Although newer, less restrictive variations of this “blind-sightedness” have developed, the “traceless” notion lingers on in many contexts.
(See Daston, L. & Galison, P. (2010) Objectivity. New York, on: Structural Objectivity, p. 253; Trained Judgment, p. 309; and Einstein’s “holistic” approach to the objective/subjective distinction, p. 305.)
This is especially true in public education and social services, where special educators, planners and administrators give lip service to behavioristic notions, e.g. functional analysis, stimulus, and reinforcer, as though the adoption of such vocabulary rendered their efforts more “scientific” -- even as they mostly act in disregard of the “theory” in their practice.
Pedagogy and Community. Which is more “objective”: a beautifully detailed painting of a bird, or a photograph of that same bird? Scientists of the 17th to early 19th Century worried that if every detail of a real bird were depicted, students, scientists-to-be, would take the variations to be critically definitive of the species. Thus they idealized their expensively produced instructional lithographs for the sake of objectivity, i.e. being true to nature and normative for classification.
Beginning in the mid-19th Century, scientists, equipped with instruments that enabled them to see many deviations in phenomena from what people commonly took to be normal, were concerned not to let local prejudice, cultural assumption or technical ignorance dictate reality. No less importantly, machine-aided observation and duplication was cheaper than lithography, and it enabled the development of geographically widely distributed scientific communities.
Neither approach has surpassed the other, and later conceptions of “objectivity” developed which still “share the stage” with them, depending upon which scientific discipline or community is involved. (See Daston & Galison, throughout.)
Morals, Law & Ethics. Partisans of different approaches to the objectivity-subjectivity distinction believe(d) it to be a moral duty to assert their position against their mistaken opponents. Truth, enlightenment and moral fibre depended on it. And yet none of them could employ arguments for their positions that were recognized as purely scientific; they required, at least, commitment to ideals of science that were basically a priori philosophical.
An interesting question is whether commonly expressed concerns for improving society, legally and morally, at both communal and individual levels, can be dealt with sufficiently “objectively” to employ science, some science, any science, to answer problems such as criminality, learning, equity, health, and the like. (See Can Science Improve Moral Education, Too?)
The notion of objectivity, particularly the one mentioned in the prologue, is a kind of one-size-fits-all approach. There is reason to believe that practitioners -- recall D. J. Hand, above -- who demand that would-be objects of their science be strictly measurable (as “objective” a characteristic as one might want) will discover that their quests are as likely to bear fruit as squaring the circle. (See Measuring Educational Outcomes: truth, tricks and hype)
The point? Categories of human behavior -- human actions, to the extent that they are purposive, e.g. mens rea commissions, deliberate and/or intentional -- will not be considered “objective” enough for scientific treatment.
This conclusion will no doubt be found obnoxious by those who have long looked to the “human sciences” to lead us into a shining future. (As a long-time practitioner in a “helping profession,” I find it somewhat dismaying, myself.) However, though doubters continue their long practice of disregard (i.e., distracting, “non-reinforcing” responses), one awaits a logical, reasoned argument to the contrary.
(For examples, and to examine these issues further, see The Functional Analysis of Behavior: theoretical and ethical limits.)
Hand, D. J. (2014) The Improbability Principle. Scientific American. New York.