Sora, Markov Chain, and AI Angst - Part 2

Yesterday we began our inquiry into what it is about AI that generates fear among so much of the populace. In Part 1, I addressed the first two of the eight reasons provided by ChatGPT. In this post, we’ll tackle the next four, leaving the remaining two for Part 3.

Privacy Invasion

AI technologies, especially those involving surveillance and data analysis, raise fears about the erosion of privacy. The ability of AI systems to collect, analyze, and act on vast amounts of personal data can lead to unprecedented levels of surveillance and intrusion into private lives. -ChatGPT

A 2023 survey conducted by Pew Research Center found that 81% of Americans are concerned about how private companies use the data collected about them, and 71% are concerned about how the government uses the data it collects about them. Clearly, this is a deep-seated concern.

However, I’m not convinced this should be laid at the door of AI. In a former life, I spent five years in AdTech as both a marketer and a product manager. And we collected massive amounts of data on users and households (thankfully, I never saw PII that hadn’t been de-identified). But while I personally worked on projects that used machine learning to optimize bidding in real time, our data collection practices had nothing to do with AI. What they did rely on was huge feats of data engineering.

The other thing that kept us in data was GPS coordinates from smartphone apps, along with other signals derived from device IDs and Wi-Fi. The more connected a user was, the more we could track them and collect identifiers and behavioral data points. More broadly, I believe that this, rather than AI, is the privacy culprit. In other words, the ship has sailed. Fortunately, there has been significant progress in consumer privacy regulation.

In any event, companies use ML/AI to improve their ability to reach target audiences with the “right ads”. Meaning, AI is not guilty of collecting the data; rather, it ingests data that has already been collected or is being piped into the models in real time. Ultimately, what AI does is increase the possibility that you might actually find one of the ads you see helpful.
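
To make that division of labor concrete, here is a minimal sketch (mine, not anything from my former employer) of how a real-time bidder might use an already-trained click model. The model collects nothing; it only scores features that the data-engineering pipeline has already assembled. The feature set, numbers, and bid logic are made up purely for illustration.

```python
# Hypothetical sketch: the ML piece of ad targeting scores data that was
# already collected; it does not do the collecting.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pretend training data already sitting in the platform's pipeline:
# [hour of day, mobile device (0/1), past clicks], label = clicked or not.
X_train = np.array([[9, 0, 2], [21, 1, 0], [14, 0, 5], [23, 1, 1]])
y_train = np.array([1, 0, 1, 0])

ctr_model = LogisticRegression().fit(X_train, y_train)

def bid_for(impression_features, base_bid_cents=50):
    """Scale the bid by the predicted chance the user finds the ad relevant."""
    p_click = ctr_model.predict_proba([impression_features])[0][1]
    return base_bid_cents * p_click

print(bid_for([20, 1, 3]))  # higher predicted relevance -> higher bid
```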

The truth is that the convenience and efficiency of smart devices and the internet itself have this exchange built in. I’m referring to monetization. AdTech and data collection practices are byproducts of the need to show ads in order to keep sites and apps free. But to balance that, we have the aforementioned regulation, as well as numerous methods individuals can use to protect their own privacy. And I think (hope) opting in will be the compromise companies and users eventually make.


The government is a tough one. Unfortunately, unless the media discovers something, documents are leaked, or a whistleblower comes forward, there is usually a significant gap between when the government starts doing something around domestic intelligence and when the populace finds out. As for how the government is using AI to facilitate data collection, I suspect the answer is “however they want.”

Bias and Discrimination

AI systems can inherit biases from their human creators or from biased data sets they are trained on. This can lead to discriminatory outcomes in areas like hiring, law enforcement, and loan approval, perpetuating and even amplifying existing inequalities. -ChatGPT

This is an absolutely legitimate fear. Data will always reflect systematic bias if steps are not taken to mitigate it. In healthcare, there have been numerous disturbing cases where the use of AI resulted in disparities in care or interventions. I won’t go into all of the ways we can and should be addressing this issue here, but I would recommend this paper co-authored by fairness-in-machine-learning expert Dr. Na Zou.
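
For readers who want a feel for what auditing for this looks like in practice, here is a tiny, hypothetical sketch of one common check: comparing a model’s positive-outcome rate across groups. It illustrates the idea only; the numbers are invented, and it is not the methodology from the paper mentioned above, which covers far more than this single metric.

```python
# Hypothetical sketch: a basic disparate-impact check on a model's decisions.
import numpy as np

group = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])  # protected attribute
approved = np.array([1, 1, 0, 0, 1, 0, 0, 1])               # model's decisions

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()

# Ratios far below 1.0 (commonly < 0.8) are a red flag worth investigating.
print(f"Group A approval rate: {rate_a:.2f}")
print(f"Group B approval rate: {rate_b:.2f}")
print(f"Disparate impact ratio (B/A): {rate_b / rate_a:.2f}")
```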

Weaponization

The potential military applications of AI, including autonomous weapons systems that can select and engage targets without human intervention, raise fears of a new arms race and the possibility of AI being used in warfare with catastrophic consequences. -ChatGPT

I find this somewhat bemusing. There’s nothing speculative about the threat atomic weapons pose to humanity, and for over 70 years we’ve found a way to co-exist with them. That is not to downplay the risk of nuclear destruction, but rather to say that existential threat from war is something we’ve had to learn to accept. The runner-up to nuclear war among threats we’re already living with is cyberattacks against our aging power grid. In that scenario, at least, we can be grateful AI is being used in cybersecurity.

Existential Risk

Some theorists and technologists fear that a superintelligent AI, one that surpasses human intelligence in all aspects, could pose an existential risk to humanity if its goals are not aligned with human values and interests. This scenario, often depicted in science fiction, involves AI deciding that humans are either a threat, irrelevant, or resources to be optimized, leading to humanity's extinction or subjugation. -ChatGPT

We discussed this in Part 1, so I don’t want to be redundant. The additional point I will make is that I find it frustrating how often, in the context of this specific fear, science and futurism are conflated. Though occasionally futurists do get it right (see my literary hero William Gibson).

We'll reconvene here tomorrow for Part 3.