Within the last year, it feels like some thresholds have been crossed. AI is making itself felt to the mainstream consumer. Hey, Google, Amazon, Apple and Facebook between them know more about my life than I do! So now, not only do I not understand the world, but someone else does. And I don't understand them. That's scary. That's "propitiate the gods with random offerings and pinches of salt" territory.
Which is why I am very happy to see initiatives like the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
Back when I grew up—in the last century—we all believed the world was comprehensible. Even when quantum mechanics made the rules a bit harder to understand. Any problem could be solved if you got a bunch of smart and hard-working people on it for long enough. With the possible exception of Middle East politics.
But the dominance of software has made my naïve Newtonian world view obsolete. Were you ever taught that doing the same thing repeatedly and expecting a different result is the definition of idiocy? And how does that jibe with having to power cycle three times before your computer finishes its reboot?
OK, some of this complaining is just me getting to be an old grouch. There are lots of brilliant software folks doing amazing things with software and software testability. And another pet peeve of mine is the amazingly mistaken arrogance of thinking we understand everything about how the world works. But that's a post for a later date.
On the other hand, the Renaissance was 500 years ago and the Industrial Revolution was 250 years ago, so it's likely that we have actually moved on to a different era. My fears are not only about what this new era will be but whether there will be a painful interregnum to get there. When you stop believing that the world works by scientific rules, competing alternative systems of thought can vie for dominance. If that mixed state persists, to use the language popularized by President Trump, it's a mess.
In the words of the Initiative's mission statement, technologists' goals need to be expanded beyond technical performance metrics to include the well-being of humans. The Initiative identifies trust between humans and technology as a critical requirement for the technology to succeed.
I have to admit I glazed over while reading the very lofty guiding principles put forth by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: human rights, legal responsibility, blah, blah, lawyer stuff. Reminded me why I didn't go to law school.
Until I got to principle #3, Transparency, which states that it should be possible to discover how and why the artificial intelligence or autonomous system (AI/AS) made a particular decision and acted the way it did.
Trust relies on believing it is possible to understand why. Transparency in AI/AS won't just allow investigators to assign blame after an incident. It won't just help designers debug the system. It isn't just to speed adoption of autonomous vehicles. It might forestall a Dark Age. Turns out my fears about threats to our belief in causality and science are really caused by a perception of eroding trust in software. Turns out ethics and "soft" values are critical to holding up the supposedly "hard" world view based on reason.