A Long View on Artificial Intelligence

Editor’s Note: This article was originally published by Law & Liberty on March 1, 2024, and is crossposted here with permission.


Over the past year, artificial intelligence has become a subject of widespread public interest and concern. This is mainly thanks to new Generative AI models, such as ChatGPT and Bard, which have brought AI unprecedented attention and adoption. For the first time, it’s easy for the general public to use machine learning models. The phenomenon is so big that Rolling Stone covered the firing of Sam Altman, the OpenAI CEO, like a celebrity story. Venture capitalists are pouring money into Gen AI, despite overall weak macroeconomic conditions. Meanwhile, prominent AI scientists proclaim, “Advanced AI could represent a profound change in the history of life on Earth.” The media is full of contradictory predictions that AI is going to take our jobs, destroy our creativity, fight us with killer robots, and abolish term papers (the last one might be less of an existential risk). This is all on top of worries about classic AI being biased and prejudiced.

Recent advances in AI are truly significant: Gen AI passes the Turing Test relatively uncontroversially, and it is acing tests meant to measure human knowledge, e.g. the LSAT and AP Exams. But the hype is overblown. AI obeys Amara’s Law, which states, “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” AI will continue on its current, mostly mundane, somewhat delightful course, but the pace will accelerate.

The trajectory of Gen AI is promising, but it would be a big mistake to extrapolate too far. This is not the first time people have worried that artificial human-level intelligence might be within sight. In 1950, Turing believed it was about 50 years away, upon the arrival of computers with 5MB of memory. Minsky thought general AI would emerge around the 1990s, which was about 30 years away at the time. When Deep Blue beat Kasparov, Newsweek called it “The Brain’s Last Stand.” Times of little progress, the so-called “AI Winters,” followed these pronouncements. This kind of hype is called the “Eliza Effect,” in which AI’s natural language abilities, often just clever hard-coded rules, cause excessive optimism about AI in general.

The media is focusing on Generative AI, yet most of the publicly available, widespread applications are little more than wrappers around foundational models like ChatGPT and Bard. In actuality, the less flashy, classic machine learning models are having a much more significant effect. Almost all AI in production uses traditional algorithms, e.g. linear regression or decision trees. They are embedded into applications, products, and services that perform prosaic tasks like pricing airline tickets, detecting fraudulent transactions, increasing energy grid efficiency to reduce emissions, recommending similar products, automatically crafting playlists, personalizing ads on social media, and supplying better and longer-range weather predictions. They largely go unnoticed, working behind the scenes.
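
To make the point concrete, here is a minimal sketch of the kind of unglamorous model doing this work: a small decision tree flagging suspicious transactions. The features, data, and thresholds are invented for illustration, and the scikit-learn library is assumed; production fraud systems are, of course, far more elaborate.

```python
# Illustrative only: a classic machine-learning model of the kind described above.
# A shallow decision tree learns to flag "fraudulent" transactions on made-up features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Toy features: transaction amount ($), hour of day, distance from home (km)
X = rng.uniform(low=[1, 0, 0], high=[5000, 24, 500], size=(1000, 3))
# Toy label: call large, late-night, far-from-home transactions "fraud"
y = ((X[:, 0] > 3000) & (X[:, 1] > 22) & (X[:, 2] > 200)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
```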

For example, the public is mostly unaware of the great, but largely unobservable, strides in manufacturing AI, e.g. optimization of production processes, improvement in quality control, better scheduling of production lines, and automated detection of defects in products. This has improved product quality, reduced prices, and enhanced safety and security. Of course, incorporating these technologies into new plants has also replaced unskilled floor labor with highly educated computer scientists and engineers. The societal implications are complicated.

Additionally, while clearly not an equivalent replacement for professionals, AIs provide services that were previously only available from experts, e.g. counseling services and financial advice, as well as Khan Academy, Duolingo, and other similar tools. These platforms democratize these services, vastly increasing their availability and affordability, and often increasing personalization. Because they incorporate crowdsourcing and feedback mechanisms, they will automatically adapt to both implicit and explicit feedback and continue to improve.

Small “edge” devices, rather than large servers, are also now powerful enough to run AI algorithms, e.g. vacuuming robots, map applications on our phones, and, of course, the Alexa-enabled soap dispenser. Using AI on the edge, with sensors and high-resolution cameras providing better data to the algorithms, also allows self-driving car algorithms to handle increasingly complex situations. As the edge gets more powerful, so too will our devices’ ability to customize and personalize our experiences.

Machine learning has made inroads in healthcare, offering AI-powered diagnostic tools, automating the tedious processes of scheduling appointments and managing patient records, and personalizing treatment plans and medicines. AI research has turbo-charged the work behind therapeutic discovery, e.g. discovering new drugs and making medical devices more effective, including speeding up work in gene expression and designing functional proteins. This AI-powered flywheel will only speed up as computing becomes more powerful, data becomes more plentiful and linked, and new computational biology algorithms are discovered. Many of the most powerful generative AI applications will be private, proprietary models in areas like drug discovery.

In both Gen AI and classic machine learning, models get better as they incorporate more and better data. Among the popular Gen AI models, the diverse training datasets include books, websites, social media conversations, and computer code. For example, while the details of the training data are proprietary, OpenAI has shared that ChatGPT 3.5 is more conversational because it was trained with about a third more data than 3.0, including more dialogue data. The trend continued with GPT 4.0, which included not just 25 times more data but also greater source diversity, including images. Thus, the race to improve Gen AI will be a race to acquire more data and new data sources, such as sensors, smart devices, social media, and videos.

Reinforcement Learning from Human Feedback (RLHF) is a vital part of the state-of-the-art models. Humans guide the models with examples of the desired output. This is far superior to previously used methods in which AI gave itself feedback, and it has led to improved performance, reliability, and safety. However, it is also costly to employ the trainers. Work continues on Gen AIs: tamping down hallucinations, increasing performance, and protecting them against adversarial attacks.
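
For the technically curious, below is a minimal sketch of the preference-modeling step at the heart of RLHF, under simplifying assumptions: human labelers have marked one response as preferred over another, and a toy reward model (standing in for the LLM-based one used in practice) is trained to score the preferred response higher. The trained reward model then guides the main model through reinforcement learning; that second stage is omitted here. PyTorch is assumed, and the embeddings are random placeholders for encoded model outputs.

```python
# Minimal sketch of reward-model training from human preference pairs (Bradley-Terry style loss).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a response embedding to a scalar score."""
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.scorer = nn.Linear(embedding_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding).squeeze(-1)

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Labelers preferred `chosen` over `rejected`; push its score above the rejected one's.
    return -torch.nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

model = RewardModel()
chosen, rejected = torch.randn(8, 128), torch.randn(8, 128)  # placeholder response embeddings
loss = preference_loss(model(chosen), model(rejected))
loss.backward()  # gradients would update the reward model; RL fine-tuning comes afterward
```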

The current models are general purpose, designed to give users the experience of asking all of their questions to the smartest person in the room. Research is underway to develop specialized models, which would give users an experience equivalent to asking certain questions of mechanics and doctors. Several models are being used to further science. AlphaFold was one of the first such AIs to gain fame by predicting protein structures, a key step in identifying interesting compounds to be made into medicines, pesticides, and other chemicals. ChemBERT provides a similar service, predicting which compounds might serve as useful targets for therapeutics and other uses. These models increase the efficiency of science by starting to divorce it from expensive and time-consuming lab experiments. There will be incredible benefits in the coming years.

Controlling Gen AI

The cost of training a Gen AI model already exceeds $100 million and will grow with model complexity. Expensive, high-end hardware is needed to train the models. Additionally, data acquisition is costly, particularly as more difficult data sources are processed. Additional costs include compensation for the people engaged in RLHF and even the electricity to power training and inference hardware. Thus, the process of training Gen AI remains firmly in the domain of larger corporations and extremely well-funded non-profits.

This greatly limits who can produce and control models. Since these models’ responses are largely a function of their input data, they are biased by nature, though often unintentionally. Additionally, some are concerned that companies could hinder access to the models, although there is currently enough competition that this does not seem to be an imminent risk. Likewise, especially given the lack of accountability in Gen AIs, there is a worry about the privacy and security of these models.

The open source models are competing against the proprietary models. They are pre-trained; thus, users are spared the expense and effort of training a model themselves. They are generally released under licenses that allow the public to use them for free, alter them, and share them with others. Commercial use is often restricted, although not always. Open source implies a shared responsibility to help maintain the model by suggesting improvements and fixing bugs. Without the open source models, just a few companies would dominate the foundational model market. Thus, there is a passionate interest in supporting and maintaining the open source community for LLMs, in order to ensure that access is democratized.

The Gen AI open source community is currently thriving at Hugging Face, which hosts models, benchmarks, and discussions. The open source community will remain strong, but it remains an open question whether its models’ performance will catch up to the proprietary models, given the expense involved in training them. Private companies have been contributing far less to open source LLMs. Google, previously famous for publishing foundational science around Gen AI, has recently stopped. Their previous work, including “Attention Is All You Need,” was sufficient to jumpstart Gen AI.

Of course, the foundational models are just that: foundations. While only a select few can train them, they are publicly accessible through interfaces. A huge ecosystem of companies is building applications that leverage the models. At the same time, they are also building tools to fine-tune them and to better prompt and manage them in production. Thus, even if open source doesn’t catch up, a lively, competitive market for the proprietary models might suffice.

Responsible AI

Irresponsible AI can be deeply malicious, enabling automated warfare, surveillance, censorship, and propaganda. But usually, misbehaving AIs are just incompetent or jerks. The types of errors they commit might lead to a resume screener rejecting people of a certain demographic or a utility control system becoming unreliable. These ubiquitous risks are much more prevalent than the extremist, existential fears that catch media attention. Humans have many of the same issues as AIs, including bias and incompetence, but because AIs often make different errors than people, they bring in an additional element of unpredictability.

With the increasing power and prevalence of AI systems, both governments and citizens demand Responsible AI (RAI) initiatives to ensure they are deployed safely and ethically. They call for a range of things, from guidelines to corporate policies to stringent regulation. Concerns often overlap with Big Data issues, e.g. privacy and consent, and there are also concerns about biased decisions, calling for model transparency and explainability. Several organizations, like NIST and the Future of Life Institute, have put forth guidelines about making ethical models, including the requisite human governing processes.

As an illustrative example, traffic cameras, sensors, and GPS devices create massive amounts of data from the transportation infrastructure. This data is leveraged to measure current traffic as well as to predict future flow, which in turn helps traffic run more efficiently and safely by optimizing traffic light timing, adjusting speeds on variable speed limit signs, and rerouting traffic (and emergency vehicles) around accidents or congestion. The result is significantly reduced travel times and fuel consumption, and thus, emissions. But, of course, there are also severe privacy concerns that stem from extensive surveillance and tracking of mobility, including more privacy-compromising methods that would have cars share real-time positioning.
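
As a toy sketch of that loop, and emphatically not a real traffic-control system, one could forecast the next interval’s volume from recent sensor counts and scale a signal’s green phase toward the heavier demand. Every number and the scaling rule below are invented for illustration; scikit-learn is assumed.

```python
# Toy sketch: predict near-term traffic volume from recent counts, then adjust green time.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical vehicle counts per 5-minute interval from one intersection approach
counts = np.array([42, 45, 51, 60, 66, 71, 80, 86])
X = np.arange(len(counts)).reshape(-1, 1)      # time index as the lone feature
model = LinearRegression().fit(X, counts)
predicted = model.predict([[len(counts)]])[0]  # forecast for the next interval

# Simple proportional rule: allocate green time by predicted demand, within safety bounds
base_green, min_green, max_green = 30, 15, 60  # seconds
green = float(np.clip(base_green * predicted / counts.mean(), min_green, max_green))
print(f"Predicted volume: {predicted:.0f} vehicles; green phase: {green:.0f} s")
```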

Corporations want to create ethical models to maintain their reputations, including avoiding damaging headlines about bias. However, implementation of RAI is tricky, particularly for the more advanced algorithms, which tend to have the best performance and thus are generally preferred. Remediating biases found through auditing the models can be difficult because models mostly perpetuate the prejudices of the data on which they are trained. Changing the underlying training data can be expensive and sometimes impossible. There also tend to be many people involved in implementing applications built on AI, making it difficult to audit them and then attribute responsibility and blame. Likewise, the rapid progress in AI makes it difficult for frameworks and regulations to keep pace.

AI efforts will trend towards ensuring responsibility, independent of regulatory efforts. Companies will seek to make reliable and fair models as a matter of good corporate policy, particularly in sensitive industries like healthcare, finance, and criminal justice. Desires for RAI will not be categorically different from the corporations’ desires to be fair and unbiased in other areas. To avoid the risk of embarrassment, organizations will seek transparency in their algorithms. However, given that model performance is often at odds with explainability, efforts to implement RAI will be tempered.

Human supervision will play a big role in ensuring RAI within corporations and governments. There is already panic about AI manipulating the upcoming US elections, from social media to deepfake videos. The populace will get more sophisticated at detecting these manipulations, in the same way that it has learned to look for Photoshopped images. Likewise, organizations will converge on a few accepted standards, similar to what happened in data privacy.

One potential solution to lessen the burden of human oversight is Constitutional AI. A “constitution” containing a framework of ethical guidelines is laid out by humans and then encoded in algorithms and technical constraints. The models are trained using datasets that conform to the constitutional values, while being evaluated for their adherence to constitutional principles. Oversight to determine whether they have followed the constitution can be performed by an independent AI that is periodically audited by a human. For example, humans can lay out ethical guidelines for criminal justice decision-making and use both classic algorithms and LLMs to determine whether they have been satisfied. Occasionally, there will be a review process to ensure compliance.
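
One published realization of this idea is a critique-and-revise loop, sketched below under heavy simplification: the model critiques its own draft against each constitutional principle and then rewrites it. The generate function is a hypothetical stand-in for any LLM completion call, not a real library API, and the principles are invented examples.

```python
# Hypothetical sketch of a Constitutional AI critique-and-revise loop.
CONSTITUTION = [
    "Do not reveal personally identifying information.",
    "Do not make recommendations that discriminate on protected characteristics.",
]

def generate(prompt: str) -> str:
    """Stand-in for an LLM completion call; replace with a real model client."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Critique the response below against the principle '{principle}'.\n\n{draft}"
        )
        # ...then revise the draft in light of that critique.
        draft = generate(
            f"Rewrite the response to address this critique.\n\nCritique:\n{critique}\n\nResponse:\n{draft}"
        )
    return draft  # revised drafts can also become training data for fine-tuning
```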

There is a fair amount of hysteria surrounding general intelligence, in which artificial intelligences would learn and perform all intellectual tasks at the level of a human. Previously, shrill warnings about the existential risk of this emanated from the fringes of society, but now they are being raised by prominent members of the research community. What is lacking from these cautions is a credible explanation of how this might occur, or why an AI could not just be shut down at the first sign of trouble. The Precautionary Principle prevails, urging a pause or even discontinuation of research. This hysteria distracts from the concrete, current risks of AI violating privacy, making prejudiced decisions, and the like.

While more of a Big Data problem than an AI-specific problem, the US courts have begun to hear cases about copyright enforcement in Gen AI-generated content. Without a clear trace of the source content, which is very hard to maintain in Gen AI training, it is difficult to determine whether Gen AIs are creating “infringing derivative works.” Of course, this is a gray area for human artists as well, who draw inspiration from each other. The US Copyright Office has ruled that human authorship is needed for copyright. Over the next few years, we will see people fighting the technical and legal challenges, trying to protect their content from being added to the models.

Countries have different regulatory approaches to AI. Australia, Israel, the UK, and India apply existing privacy laws to regulate AI. Many countries, e.g. Singapore and those in the EU, are discussing additional AI-specific regulation. Some countries, like France and Cuba, are outright banning certain AI services, like LLMs. The US government is currently just issuing guidelines, although there is a risk that the US will end up with a disparate collection of state laws, creating a complicated regulatory landscape. Naturally, companies developing applications based on AI will shy away from developing in and selling to areas with vague and confusing regulation, a trend that is already visible in venture capital investment in these regions.

Concerns about Jobs

Many people are understandably concerned about the effect of AI on the labor force. This sometimes sets up a false binary between the benefits of AI and the benefits of employment. But the question “Will AI take our jobs?” is better phrased as a question of how humans will use AI to do their jobs. AI will handle highly repetitive tasks that don’t need creativity, but only in rare cases will entire jobs disappear. For most of the history of America, roughly 5% of jobs have become obsolete every year. There are some exceptions, as when robots replace humans for hazardous tasks, or in situations where faster response times are necessary. These changes are the continuation of a trend that has lasted for over half a century, as computers take over more complicated routine tasks, with humans handling the more complex versions. As the pace of innovation has accelerated, the worry is that the rate of job destruction will also increase. Innovation is famously hard to predict, but any increase in job destruction will likely be offset by a faster rate of job creation from LLMs.

Many of the new AI services, from chatbots to self-checkout, are strictly worse than a human providing the same service. We also have inherent biases that lead us to judge human mistakes as less bad than AI mistakes. We would rather have a diagnosis from a less accurate human doctor than from an AI. It is also true that AI can have a disproportionately negative impact on certain demographics or types of workers. The accelerated pace makes it harder to retrain and re-skill workers in a timely manner. Just as the spreadsheet replaced the bookkeeping clerk but increased the demand for accountants, AI will replace much of the paralegal’s job, but less of the lawyer’s.

For many workers, Gen AI will be a co-pilot, jumpstarting the creative process. A content creator, whether a real estate agent writing a listing or a student writing a paper, can begin with material provided by the model and then iterate, much as one once did with an encyclopedia entry. Of course, there are still serious limitations to these models, especially when it comes to diversity of styles. The first “Write a poem about…” prompt to ChatGPT is amazing. The fifth looks a lot like the previous four. Preliminary research demonstrates productivity gains from AI, as when ChatGPT helps with mid-level writing. A similar dynamic can be seen in some STEM fields. Although it has only the abilities of an entry-level engineer, GitHub Copilot helps people write code. Thus, LLMs can debug code and write simple functions, but we are far from the machine being able to rewrite its own code better, the so-called Singularity.

The prospect of Gen AI replacing artists and writers has also caused pervasive worry. One can debate whether AI art is actually creative, but it certainly has been moving from direct imitation to creating more novel work. As a result, the type of art that humans do will likely be changed. The advent of the camera didn’t destroy painting, but it made it less common. It also created the completely new art of photography. For documentary evidence or quick images, photography wins. For deep emotion or abstract art, painting still dominates. Additionally, the act of painting still has value for the human, regardless of how it compares with the machine-created equivalent. We will likely find similar patterns in how AI-created and human-created content are used in the future.

So what’s next? Nobody is sure, although everyone seems to have an opinion. The only definite thing is that AI will take us in unexpected directions. Who would have thought thirty years ago that everyone would have a phone in their pocket, and yet nobody would call each other? To make some conservative predictions, AI will accelerate drug discovery. Militaries are going to continue to build the best weapons they can, which very likely means autonomous weapons with quicker response times, like those currently fielded in Ukraine. AI will exacerbate some social inequalities and reduce others. Where we end up on balance is anyone’s guess, although technological progress usually makes us better off. We can be sure that these tools will become our new normal. We can also be sure that someone is going to try to regulate them, as the media continues to spread panic.


Photo by Jared Gould — Adobe — Text to Image
