Ann Arbor Area Business Monthly
Small Business and the Internet
AI, Aaiieeee!
March 2018
By Mike Gould
Asimov, he had some bots, AI AI 0…
Days of Future Pest, The Droidz, 2035
Ah, AI: Artificial Intelligence. Will this be humanity’s faithful servant, à la Star Trek, or our downfall, à la Terminator? “Answer unclear, ask tomorrow,” as the Magic 8-Ball would say. But right now is the best time to start asking this question, as cyber labs everywhere are getting better and better at building machines that act intelligent.
But true AI remains elusive; you can lead a computer to intelligence, but you can’t make it truly think. Yet. But things are getting close enough to science fiction that people are beginning to fear for their jobs and fret generally about things to come.
TLDR (Too long, didn’t read – nerd-speak for “long story short”): yeah, big changes are coming, but not right away. Now would be a good time to go back to school and learn some computer coding, hydraulics, and maybe some skilled-trades chops if you want to remain employable. And social services and nursing; we’ll always need human nurses…
Future Imagined
I have been aware of this issue all my life. A reader of science fiction since I was six, I have a library of around 3K books, mostly concerned with various aspects of the future. Science fiction explores how mankind deals with technological change, which includes utopias, dystopias, and all of the –pias in between.
Isaac Asimov wrote some of the defining stories and novels about AI back in the ’50s and ’60s – check out The Caves of Steel if you are interested. This novel posits a robot detective with a positronic brain that has most of the characteristics of a human, augmented with advanced intelligence and empathy-reading – sort of a cross between Mr. Spock and HAL 9000, with a little Sherlock Holmes thrown in.
Set 3000 years in the future, this seems plausible; happening in the next few years, not so much. We are still in the infancy of this realm, but given the ever-increasing rate of technological change, now is a really good time to have some serious discussions about the implications.
Cassandras
We’ll start with no less an eminence than physicist Stephen Hawking, who is not a big fan of AI (URLs for all the following excerpts are below). He said:
I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.

This gets us right into Terminator territory: the satellite-based super-AI Skynet achieves self-awareness, creates killer robots, game over, man.
In an article reprinted by Scientific American, AI researcher Arend Hintze wrote about his key concerns (the paraphrasing is mine):
Fear of the unforeseen
This is the main issue raised time and again: we barely understand what intelligence is – what are the unforeseen consequences of sharing it with machines?
Fear of misuse
Who sets up the initial constraints and specifications for AI?
Fear of wrong social priorities
What if this becomes yet another tool in the hands of the 1% to better control the rest of us?
Fear of the nightmare scenario
Terminator, HAL 9000, The Matrix, Colossus: The Forbin Project, and the many, many more dystopias of screen and science fiction literature.
The morning I’m writing this, the blog/aggregator Boing Boing posted an interesting article about why giving an AI control over a corporation would be a really, really bad idea.
Silicon Valley's fears of killer AI are predicated on the idea that AIs would act in precisely the way today's corporations do: i.e. that they'd be remorselessly devoted to their self-interest, immortal and immoral, and regard humans as mere gut-flora… towards pursuance of their continued existence.
But it should be noted that we have a long, long way to go in developing a general-purpose AI that can deal with reality with the same understanding we have. The Wired article below details the limits current AIs have in dealing with objects.
Bad News, Good News
So those are the downsides we need to be thinking and talking about; what about the upsides? I mean, we are doing all this to advance the technology we use to understand our own intelligence, as well as doing useful things like analyzing the mountains of data our various other systems gather. We want AI to be doing things humans are not really good at, like predicting weather, financial and social trends, and anything else demanding an objective understanding of our multi-dimensional world. And things that are dangerous or boring to humans: robotically exploring other worlds or the bottom of the ocean, or cleaning up damaged reactors.
We have had limited AI for years: expert systems. These are suites of software that use databases of expert information to make decisions or control other systems, usually focused on single tasks, such as medical diagnosis or stock trading. You scan in an image of an X-ray, for instance, and a computer compares it to millions of other X-rays. Knowing the diagnoses based on those images, the software is able to flag suspicious areas in your lung, or whatever.
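To make that concrete, here is a toy sketch of a rule-based expert system in Python. The rules, thresholds, and feature names are all invented for illustration – a real diagnostic system would encode thousands of expert-derived rules, not three – but the shape is the same: canned expertise, applied mechanically.

    # Toy rule-based expert system; rules and thresholds are invented
    # for illustration, not real medical logic.
    RULES = [
        (lambda f: f["opacity"] > 0.8 and f["size_mm"] > 10.0,
         "flag: suspicious mass"),
        (lambda f: f["opacity"] > 0.8,
         "flag: dense region, suggest follow-up"),
        (lambda f: True, "no findings"),
    ]

    def diagnose(features):
        # Return the first conclusion whose rule matches the scan's features.
        for rule, conclusion in RULES:
            if rule(features):
                return conclusion

    # Hypothetical measurements pulled from a scanned X-ray:
    print(diagnose({"opacity": 0.9, "size_mm": 14.0}))  # flag: suspicious mass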
The Ars Technica article below talks about a deep-learning algorithm (somewhere between an expert system and true AI) that can detect various heart problems from retinal scans.
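For the curious, here is a minimal sketch of what such a deep-learning classifier looks like, written in Python with the Keras library. Everything here is hypothetical – the image size, the layer sizes, the labeled training scans – and this is emphatically not the model from the article, just the general pattern: instead of being handed rules, the network learns its own visual features from labeled examples.

    # Minimal deep-learning sketch (Keras/TensorFlow); all sizes and data
    # are hypothetical stand-ins, not the model from the article.
    import tensorflow as tf
    from tensorflow.keras import layers

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(128, 128, 3)),      # small RGB retina images
        layers.Conv2D(16, 3, activation="relu"),  # learn local visual features
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),    # probability of "at risk"
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # train_images and train_labels would come from thousands of scans
    # labeled by cardiologists:
    # model.fit(train_images, train_labels, epochs=10)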
A Many-Edged Sword
Like most of our technologies, AI offers both great advances and great dangers. It is up to us to figure out how best to direct and control it. And that depends on an educated public, so support STEAM teaching in schools!
Fortune - Stephen Hawking Sounds the Alarm on Artificial Intelligence:
http://fortune.com/2017/11/03/stephen-hawking-danger-ai/
Scientific American - What an Artificial Intelligence Researcher Fears about AI:
https://www.scientificamerican.com/article/what-an-artificial-intelligence-researcher-fears-about-ai/
Boing Boing - What happens if you give an AI control over a corporation?:
https://boingboing.net/2018/03/01/what-happens-if-you-give-an-ai.html
Ars Technica - AI trained to spot heart disease risks using retina scan:
https://arstechnica.com/science/2018/02/ai-trained-to-spot-heart-disease-risks-using-retina-scan/
Wired - The Limits Of Explainability:
https://www.wired.com/story/the-limits-of-explainability/
Mike Gould ain’t afraid of no droids. Yet. He was a mouse wrangler for the U of M for 20 years, runs the MondoDyne Web Works/Macintosh Training/Photography mega-mall, is a laser artist, directs the Illuminatus Lasers, and welcomes comments addressed to mgould@mondodyne.com.