I suspect that we are listening to both Hedgehogs and Foxes.
I want to look at AI in one very narrow area – the use of AI to improve the hiring process. One of the Hedgehog articles effused about the potential for AI (including machine learning) to significantly improve hiring, resulting in companies and organizations hiring people who are a much better fit for both the job and the organization.
Two things about such a prediction bother me:
- GIGO – the old programmer acronym: Garbage In, Garbage Out. There are actually two possible “garbage” streams in AI:
- The programming stream.
- While researchers, software architects, software engineers, and programmers are getting better at making software do “intelligent” things, there are countless ways in which the software can fail to yield intelligent results. That problem will never go away. The “false positives” can, in principle, be driven below the error rate of the human hiring process, but there is little to no evidence that we’re there yet.
- Another programming issue is that we really don’t know what “intelligence” is at the human level. It’s difficult to mimic something when you don’t understand the thing you’re mimicking.
- The input stream.
- The job description must accurately capture what the hiring leader wants the person to do. I’ve posted before about the ways in which this effort goes awry. The leader may want the “person who just left,” or may reuse the job description from last time without regard to how the company or the job has changed. The leader may set requirements that are too stringent or too loose. The required skills may have little to do with the job. Writing a good job description is both essential for AI to work and extremely hard – the machine cannot apply judgment.
- The resumes must accurately reflect the actual experience and accomplishments of the applicant – not merely a clever regurgitation of the job description. A clever applicant might be able to fool the computer with a kind of reverse Turing Test (Alan Turing, the computing pioneer, proposed back in 1950 that a computer could be called “intelligent” when it could “converse” with a person and the person could not tell it was a computer). The sketch after this list shows how easily a naive screener can be gamed this way.
- Humans have begun to accept the output of computers in an alarmingly blasé manner. I just watched the movie Hidden Figures, in which a human being (one of the heroines) had to check the results calculated by a computer – output that had been checked and found correct numerous times before – because on two separate runs the computer had generated significantly different results. Working under a pre-launch countdown deadline, she calculated the correct results in time for John Glenn to take his historic orbital flight. But she was only able to do that because someone looked at the computer results and did not simply accept them. I’m concerned that such non-acceptance is becoming increasingly rare. Leaders challenge assertions made by people, but they seldom challenge the results barfed out by the computer.
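To make the GIGO point concrete, here is a minimal sketch of the kind of naive keyword-overlap screener that crude resume filters amount to. Everything in it – the scoring function, the job description, the two resumes – is invented for illustration; real screening products are proprietary and more elaborate, but the failure mode is the same.

```python
import re

# A deliberately naive screener: score a resume by the fraction of
# job-description words that also appear in the resume. This is a
# stand-in for illustration, not any real product's algorithm.

def tokens(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))

def overlap_score(job_description, resume):
    """Fraction of unique job-description words found in the resume."""
    jd, rs = tokens(job_description), tokens(resume)
    return len(jd & rs) / len(jd)

job_description = (
    "Seeking a results-driven project manager with strong stakeholder "
    "management skills and experience delivering cross-functional "
    "initiatives on time and under budget."
)

# Resume A simply parrots the posting's language -- the reverse
# Turing Test hoax described above.
resume_a = (
    "Results-driven project manager. Strong stakeholder management "
    "skills. Experience delivering cross-functional initiatives on "
    "time and under budget."
)

# Resume B describes real, relevant work in its own words.
resume_b = (
    "Led a twelve-person team that shipped three product releases a "
    "quarter early, coordinating engineering, marketing, and finance "
    "while cutting spend fifteen percent."
)

print(f"Resume A (parrot):  {overlap_score(job_description, resume_a):.2f}")
print(f"Resume B (genuine): {overlap_score(job_description, resume_b):.2f}")
```

Run it and the parrot scores roughly 0.86 against the genuine candidate’s 0.10. Garbage in – a posting written in buzzwords and a resume that echoes them – produces garbage out, and no amount of cleverness downstream recovers the judgment that was never encoded.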
It seems to me that AI has a long way to go before it’s likely to match human sophistication in some areas – like hiring. That does not mean that humans execute the hiring process well – only that I don’t think a machine is going to do it better right now. Computers, even those with AI routines, are tools: they cannot be relied on blindly, but, like any tool used expertly, they can enhance human performance.
Thoughts?