Sora, Markov Chain, and AI Angst - Part 3

To be or not to be...

Today, we’re concluding our discussion of what’s behind the popular fears regarding AI. If you haven’t already, check out Parts 1 and 2.

Dehumanization and Ethical Concerns

There's a concern that the widespread use of AI in areas such as care for the elderly, children's education, or even companionship could lead to a dehumanization of society. People worry that replacing human interactions with AI might erode social bonds and ethical values. -ChatGPT

As a company that facilitates human engagement with AI through voice, we might conceivably want to sidestep this point. But I’ll address it head-on, because I don’t see it as an indictment. AI is like any technology: its morality lies in its application, not its existence. We’re excited about the development of AI because we see the myriad ways it can help humans accomplish tasks. If scientists and engineers operated under the belief that the risk of abuse outweighs the value of technological advancement, we’d simply never advance.

Speaking directly to the points above, these are all areas where the responsibility lies with humans. Elder abuse and neglect have always been societal issues; it’s up to us, as a society and as individuals, to choose how we care for the elderly. The same holds true for education. EdTech has some incredibly exciting things going on, and I have a hard time seeing how the demonstrable benefits to students could be outweighed by speculative misuse.

And again, with regard to human interaction, it is ultimately incumbent on us to decide how we interact with other humans. I share concerns about the effects of people spending too much time in front of smartphone screens. But I believe the responsibility lies with me to maintain personal contact with others in my life and to regulate how much time I spend on devices. And in any event, coding and creating with a computer is a positive end unto itself; time behind a screen is not inherently worse than any other activity.

Finally, applying human ethical concerns to AI reminds me of the danger of attributing human feelings to animals. I spend a fair amount of my time working out of a ranch in Texas. An important lesson for anyone around larger animals like horses and donkeys is that, however much we love them (no one ever forgets the joy of companionship with a good horse), they are incredibly powerful and have hardwired instincts. If a horse spooks and bucks its rider, and the rider is injured in the fall, the horse will not “feel guilty.” In fact, it’s unfair to animals to saddle them (pun intended) with human emotions. We lose sight of this at our peril.

Likewise, AI is not in itself a party to human ethics. As I said above, the ethical implications are in how humans apply it. This is again an issue where I feel important discussions around technology ethics are derailed by futurism. 

Giving llamas hay during big snow

Economic and Social Disruption

Beyond job loss, AI is expected to cause broad economic and social disruptions. The fear is that the benefits of AI will be unevenly distributed, exacerbating wealth inequalities and creating new divides in society between those who control AI technologies and those who are affected by them. -ChatGPT

At the risk of sounding like a broken record, this is again a case where fears over AI are actually fears over how humans will use AI. Disparities in wealth and power are obviously a legitimate concern, but they have been throughout human history. I personally don’t believe the evils that powerful bad actors have perpetrated in the past will somehow be surpassed because of the existence of AI.

I’ll leave the question of whether or not society devolves into a technocracy to others. Instead, we’ll end with ChatGPT’s response after it listed these eight reasons, a response I concur with:

These fears are not entirely without basis, but it's also important to recognize that AI presents significant opportunities for positive change, including medical advances, environmental protection, and the potential to alleviate some forms of scarcity. Addressing these fears effectively requires careful management, transparent and inclusive policy making, and ongoing ethical considerations in the development and deployment of AI technologies.