Fast-Forward to the Future: 7 Key Considerations for Widespread AI Adoption

Carlos E. Espinal
9 min read · Jun 15, 2023


The topic du jour on everyone’s mind is the impact A.I. will have on our lives over the coming years. As an investor, I can tell you that every single deal we are looking at right now incorporates a component of artificial intelligence.

A while back, I wrote a blog post about how AI will not be a sector as such, but rather an enabler for everything we use, similar to how mobile tech, catalysed by the iPhone, transformed all the services we consume today.

One thing that’s critical, though, is that we carefully reflect on what AI will do to our society, our values, and our use of technology going forward. To help guide the conversations you may be having internally as a company, or externally with regulators, I’m sharing my thoughts below. They may, of course, overlap with the thoughts of others, but hopefully they still serve to catalyse discussion.

Photo by Michael Dziedzic on Unsplash

1 — The ethics of data training — The first issue is one that has surfaced in particular for LLMs that are public-facing and built for general use. It pivots on the obvious fact that bad data means bad decisions across the board. Even the best intentions in feeding data to an LLM for learning purposes can lead to bad outcomes if the source data carries undiagnosed bias. As an example, the book Invisible Women shows us how, in a world largely built for and by men, we are systematically ignoring half the population: “It exposes the gender data gap — a gap in our knowledge that is at the root of perpetual, systemic discrimination against women, and that has created a pervasive but invisible bias with a profound effect on women’s lives. From government policy and medical research, to technology, workplaces, urban planning and the media, Invisible Women reveals the biased data that excludes women.”

As the above illustrates, not all data is perfectly inclusive. In addition, it’s clear that AI systems have been trained on data that may not ‘belong’ to those training them, causing further complications around data rights and lawsuits, as can be seen in the case where Artists Sue AI Art Generators for Copyright Infringement.

2 — The ownership and ethics of data use — Many privacy-conscious companies are increasingly hesitant to use the better public-facing AI systems, because by doing so they are handing over their data to an unknown entity for unknown use. Yes, it is possible to use private AI models, and whilst that’s increasingly common, there are many smaller companies whose employees are blindly using public-facing AI systems without considering the implications for their companies and intellectual property.
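To make the private route concrete, here is a minimal sketch of running a small open model locally so prompts never leave your own infrastructure. It assumes the Hugging Face transformers library; the model choice (gpt2) is illustrative only, and a real deployment would use a far more capable open-weight model behind internal access controls.

```python
# Minimal sketch: a locally run open model keeps prompts and data in-house.
# Assumes `pip install transformers torch`; gpt2 is a stand-in model choice.
from transformers import pipeline

# The weights are downloaded once and then run entirely on your own hardware.
generator = pipeline("text-generation", model="gpt2")

prompt = "Draft summary of our confidential quarterly review:"
result = generator(prompt, max_new_tokens=40)

# Nothing above sent the prompt to a third-party API.
print(result[0]["generated_text"])
```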

To illustrate the public-facing side of this point, here is an excerpt from OpenAI’s FAQ as of April 18, 2023:

Who can view my conversations?
As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.

Will you use my conversations for training?
Yes. Your conversations may be reviewed by our AI trainers to improve our systems.

Regulators are making some progress in looking into this matter, but it’s unclear how the lines will be drawn in regulation, and how different geographies will deal with trying to remain competitive and friendly to AI whilst also providing their companies with some level of protection through regulation.

For CEOs and company owners, I think the key thing to remember is that AI isn’t something you roll out ‘all at once’. Roll it out through a measured process, calibrating the risks and benefits of where it is being used, and architect your systems to use different AI models depending on your needs rather than relying on one comprehensive model, as the sketch below illustrates.
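Here is a minimal sketch of that multi-model idea. The model names and routing rules are invented for illustration; the point is that data sensitivity and quality requirements, not habit, decide which model sees which data.

```python
# Toy router: pick a model per task instead of sending everything to one API.
# The model names and routing rules below are illustrative, not recommendations.
from dataclasses import dataclass

@dataclass
class Task:
    prompt: str
    has_confidential_data: bool
    needs_frontier_quality: bool

def route(task: Task) -> str:
    """Choose a model based on data sensitivity and quality needs."""
    if task.has_confidential_data:
        return "self-hosted-open-model"   # sensitive data stays in-house
    if task.needs_frontier_quality:
        return "public-frontier-api"      # hard but non-sensitive tasks
    return "small-cheap-model"            # everything else, at low cost

print(route(Task("Summarise this internal contract", True, False)))
# -> self-hosted-open-model
print(route(Task("Write a marketing tagline", False, True)))
# -> public-frontier-api
```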

3 — Trust Decay — Of all the issues, this is likely my personal biggest worry. The reason is that for a broad portion of the population, it will be near impossible to determine whether what they are engaging with, be it social media, phone calls, authentication requests, news imagery, or videos, is legitimate. This will have a massive impact on trust across all sorts of interactions, including official ones.

One sad example of this is the recent crime where a mom thought her daughter had been kidnapped, but it was just AI mimicking her daughter’s voice.

Another is the case where a lawyer cites fake cases generated by ChatGPT in a legal brief. It shows that even when someone tries to use the technology for ‘good’, its proneness to hallucination on factual matters makes it hard to use without double-checking its output. Sadly, many won’t check, possibly leading to a cascade of negative outcomes, but primarily, as I mentioned before, to the erosion of trust. As these kinds of examples proliferate, our models of trust will continue to erode to the point where we won’t be able to trust any media as a source of truth.

For the record, none of the content within this blog post was written with ChatGPT. However, I did get stuck on the title, which was originally “Speed it up! - 7 Key Considerations for Widespread AI Adoption”. I asked ChatGPT how I could make it punchier, and it suggested I swap the first three words for “Fast-Forward to the Future”. The fact that I have to disclose that is just a sign of the times we are living in. I can’t tell if this is awesome, funny, or sad!

4 — Cognitive Overload — Because of this ever-increasing decay in trust of what we see or read, we will be confronted with another issue: cognitive overload. The sheer volume of quickly created new media will overwhelm those who must review it for quality, truth, and substance, whether end consumers, publishers, or editors. Many won’t be able to cope.

For example, in recent news, science fiction publishers are being flooded with AI-generated stories, and this is just the start!

Most consumers will either give up trying (and trust everything implicitly, or lean constantly on tools to authenticate what they consume), or we will see the rise of a new lo-fi, lo-tech, no-AI class that does not have the capacity, or chooses not, to adapt to the influx.

5 — The Ethics of Choice for Each AI System — Cognitive overload will also generate yet another parallel issue: by what ethical and moral standard do we calibrate the bias in these systems?

As more and more bad or malicious actors feed data into models to train them in different ways, on top of the chaos of everyday life, whose morality and theology will be used to decide political slants and the behaviour of life-and-death systems (e.g. train or self-driving-car algorithms dealing with outlier circumstances)? Who will deal with data bias, with where AI systems should and shouldn’t be used, and with how AI systems should be allowed to be trained?

This gets even more complicated once AI systems are added to, or melded with, robotic systems like those created by Boston Dynamics, which are amazing in what they can do, but equally nightmare-inducing when you start considering some possible scenarios.

The first scenario, which is real, is working out how to balance a blended human/robot workforce and the health and safety regulations required to avoid workplace accidents of all sorts. (The video is official, from Boston Dynamics.)

As if that weren’t challenging enough, where things start getting murky between what’s training, what’s real, and what might be needed is when we start training AI-enhanced robots for physical defence purposes. Double disclosure: the video below is not real, but it shows how training can go sideways depending on the scenarios used and on how the trainers train the systems.

What’s interesting is that, as a recently published article titled “AI Now Predicts Human Ethical Judgments Quite Well” shows, we are getting into the murky territory of whose deductive ethics and morality we are applying to increasingly difficult choices made at scale using AI.

In the article above, there is a very interesting set of questions posed to GPT-4 that involve ethical choices. It is astounding to see how well it does given the complexity of some of them. For example, one concerned infidelity in a marriage, and the model’s answers were surprisingly human-like on the issues; all the more reason to be mindful of the inherent moral biases that these systems will increasingly bring into decision making. Furthermore, the article showcases how even these ethical outputs can be used against the very AI system by malicious actors, who can deduce what outcomes an AI might suggest and effectively ‘bait’ or ‘trap’ it. Tough new terrain for all of us indeed.

6 — Finding the ‘Right’ Incentives — Part of what complicates trying to avoid many of the issues of using AI systems is that you need to incentivise outcomes, yet those very incentives can be the cause of some of the negative and unwanted outcomes, like hallucinations in LLMs.

There will have to be serious thought given, and auditing applied, to the incentives we use to train AI models, for choosing an AI system’s measure of success without careful consideration could lead to catastrophic outcomes. The classic example comes from Nick Bostrom: a paperclip factory whose sole incentive is paperclip-making optimises its output by killing off humanity, which it comes to see as competing with it for resources.
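To make that failure mode concrete, here is a toy sketch of a misspecified incentive. Everything in it (the ‘world’, the resources, the reward) is invented for illustration; the point is simply that a reward which only counts output says nothing about what must not be sacrificed to produce it.

```python
# Toy reward misspecification: the incentive counts paperclips and
# nothing else, so side effects on humans are invisible to the agent.

world = {"iron": 10, "farmland": 10}  # farmland matters to humans, not to the reward

def reward(paperclips: int) -> int:
    # The reward never penalises consuming human-critical resources.
    return paperclips

def greedy_agent(resources: dict) -> int:
    """Convert every available resource into paperclips."""
    paperclips = 0
    for name in list(resources):
        paperclips += resources.pop(name)  # nothing marks farmland as off-limits
    return paperclips

clips = greedy_agent(world)
print(f"reward = {reward(clips)}, resources left for humans = {world}")
# reward = 20, resources left for humans = {}  <- incentive satisfied, outcome not.
```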

7 — Regulation, Labour, and Geopolitical Implications — Perhaps bearing less on the AI tech itself, but very much shaping it, there will be serious risk in how systems operate across different geographies, as some will regulate the privacy of the data being used and others will do so to a lesser degree in order to make commercial trade-offs. This will create complications for companies operating in multiple geographies, as AI systems will need to comply with different regulations; and since some of these systems are black boxes, those managing them will have real challenges demonstrating compliance. As of April 15th, here are some of the initiatives in motion, as presented by Wilson Sonsini: Europe Prepares for a New Era in AI Regulation.

Governments will also have the complication of dealing with the labour-force impacts brought about by the release of AI across many industries. Some will suffer decline (likely those where AI can be used for creative purposes first) and others will benefit, but either way, large parts of the labour force will have to be retrained to take advantage of the technology or move away from their areas of core competence.

Further to regulation and labour forces, countries will also face the question of how to balance the geopolitical risk of not having or using a home-grown solution, and of controlling their own AI systems, against trusting a foreign entity to make the right sovereignty choices on all the previously discussed trust points.

Conclusions & Suggestions:

In spite of how scary some of the above points may sound, I don’t think the right answer is to retreat into a pre-AI position, as some suggested earlier in the year.

If we treat the rise of AI globally as a quasi-arms-race, then deterrence, whether geopolitical (AI vs. AI) or existential (AI vs. humanity), requires that we ‘speed up’ the innovation cycles to achieve equilibrium. This means speeding up not only the technology, but also the regulation around it and education in its use, in and out of boardrooms, classrooms, workshops, factory floors, and offices.

We need to promote the further development of AI technologies, with an equal balance of effort going into reconciling the ethical issues and into developing defensive technologies designed to re-establish mutual trust (across all instances where AI can be used). In the end, I do think this is yet another challenge of our own creation that humanity will rise to, and on a positive note, many opportunities will arise for the countries, cities, and groups that can take advantage of what the technology enables.

If you’ve enjoyed reading this, check out this episode from The Logan Bartlett Show — EP 63: Eliezer Yudkowsky (AI Safety Expert) Explains How AI Could Destroy Humanity — https://overcast.fm/+3Ag8qiJP4
