In June 2009, the veteran captain of Air France Flight 447 was startled awake to find his junior crew in distress. The plane's airspeed sensors had iced over, the autopilot had disengaged and the cockpit filled with conflicting alerts. The captain had roughly three minutes of terror to diagnose the catastrophe.
He ran out of time. All 228 passengers and crew died in the Atlantic Ocean.
That tragedy is a warning about a paradox of today's technology: Better automated systems produce less experienced human operators, who then face more unusual situations when the automation hands back control. It reads like a parable for artificial intelligence, and today's Age of AI.
Following the 20th century's computing revolution, industry after industry over the last 30 years was disrupted, brought online and made more efficient with data and software. We call this "digital transformation," and recent IMF research found that the pandemic demonstrated both its widespread productivity gains and how those gains reinforce inequality for those without connectivity.
Today, entrepreneurship and workforce development are fully entwined with these shifts, even as industry laggards and left-behind communities remain. Your job likely depends on them, and that won't change soon. Adoption always takes time: It wasn't until 1925 that most American households had electricity, and only by 1960 did most have telephones. But it was clear far sooner that both of those technologies were transformative. Likewise, the digital transformation era is over, even if there's work left to do.
The frontier is no longer making an analog process more digital; it is incorporating artificial intelligence into processes and industries. If a good predictor of a transformative technology is just how much anxiety it produces, AI is bound to be as transformative as many say.
Even AI experts are split on how extreme the risks are. Nearly half of the machine learning researchers polled last year gave at least a 10% probability that AI will result in an extinction-level event for human civilization by 2100; a quarter said there's no chance of that at all. That big divide among experts has spurred other fields to weigh in. Wharton professor Philip Tetlock, author of the 2015 book "Superforecasting," has forthcoming research in which forecasters trained in probabilities put that level of catastrophic risk from AI far lower than, say, climate change.
The point? Though worth scrutiny, existential threats are unlikely to be what should worry us most about AI. Even massive job loss doesn't yet appear imminent: Rich economies may not have enough automation, and most don't have enough people to fill the jobs they do have. Social media platforms offer a better parallel. They powered true revolutions and globe-spanning connectivity, but we're only now uncovering their insidious psychological effects, years into their omnipresence. The answer wasn't to skip joining Instagram or avoid setting up your company's LinkedIn page. We should experiment with AI tools and uses for our work. But we also need to recognize the more imminent threats.
Automation already creates hazards in three common ways: hiding incompetence, eroding skills and failing in the most unusual circumstances. The last point was identified as far back as 1980 by aviation expert Earl Wiener, whose influential Wiener's Laws have helped shape how airlines integrate automation and include the timeless lesson: "Digital devices tune out small errors while creating opportunities for large errors."
To make the tragedy at this story's outset personal: How quickly could you look up from your smartphone if your autonomous vehicle alerted you that it had disengaged?
Malicious disinformation and AI hallucinations are no different. Awash with content, we’re quick to be fooled. Be intentional, then, with how you deploy AI in your own products and processes.
Rather than humans monitoring machines, why not the opposite? Treat AI chatbots as tireless interns, not limitless sages. Expect "data dignity": Demand that AI companies cite their sources and influences, and respect copyright. Keep human skills sharp.
Futurist Jaron Lanier has dismissed the idea that AI (a term he doesn’t even like) will ever surpass human intelligence. He told The Guardian in March: “It’s like saying a car can go faster than a human runner. Of course it can, and yet we don’t say that the car has become a better runner.”
Humans make at least three contributions that AI won’t replace anytime soon: new relationships, new information and new styles. That means at least two things for those of us building teams and ecosystems:
- Care more about people than companies. AI will not replace what we offer each other. The best recruiting tool you have isn't more tech; it's your people. Tell their story. The recruiting, and the business, will follow.
- Invest in a city that learns. Here’s a secret: The easier it is for you to do your job remotely, the easier it will be to automate your job. Learn Java to have a job today. Learn how to manage people to have a job tomorrow. Learn how to learn to have a job forever.
When so much is automated, telling a genuine story about people is the best way to command attention and trust. Intuitively, we understand that ceding full control to machines is unwise. A tragedy makes the lesson ring louder.
Written by Technically Media CEO Chris Wink, Technical.ly's Culture Builder newsletter features tips on growing powerful teams and dynamic workplaces. This is the latest edition.