How Should an AI Explore the Moon?

Photo: University of Alberta

Rapid advances in artificial intelligence (AI) have prompted some leading voices in the field to call for a pause on research, to raise the possibility of human extinction caused by AI, and even to ask for government regulation. At the heart of their concern is the idea that AI could become so powerful that we lose control of it.

But have we overlooked a more fundamental problem?

Ultimately, AI systems should help people to make better and more accurate decisions. Yet even the most impressive and flexible of today’s AI tools – like the large language models behind ChatGPT – can have the opposite effect.

Why? They have two key weaknesses. They do not help decision makers understand causality or uncertainty. And they incentivize the collection of vast amounts of data and potentially encourage lax attitudes toward privacy, legal and ethical issues, and risk.

Cause, Effect and Trust

ChatGPT and other “foundation models” use an approach called deep learning to sift through huge data sets and identify associations between factors contained in that data, such as patterns of language or links between images and descriptions. As such, they are great at interpolation – that is, they can predict or fill in the gaps between known values.

Interpolation is not the same as creation. It does not generate knowledge or the insights required for decision makers in complex environments.
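
As a rough illustration – using made-up numbers, not anything from these models – here is how interpolation behaves in Python: it fills gaps between known values well, but offers nothing new beyond them.

```python
# A minimal sketch (with made-up numbers) of interpolation:
# it fills gaps between known values, but creates no new knowledge
# outside the range it has already seen.
import numpy as np

x_known = np.array([0.0, 1.0, 2.0, 3.0])
y_known = x_known ** 2                    # the pattern "seen" so far

# Inside the known range, filling the gap works reasonably well.
print(np.interp(1.5, x_known, y_known))   # 2.5 (true value is 2.25)

# Outside the known range, np.interp just clamps to the last value --
# nothing new is generated (true value would be 100.0).
print(np.interp(10.0, x_known, y_known))  # 9.0
```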

What’s more, these approaches require large amounts of data. As a result, they encourage organizations to create massive datasets – or to search through existing datasets collected for other purposes. Dealing with “Big Data” carries significant risks related to security, privacy, legality and ethics.

In low-stakes situations, predictions based on “what the data suggests” can be incredibly useful. However, if there is more at stake, we must answer two more questions.

The first is about how the world works: “What drives this result?” The second is about our knowledge of the world: “How confident are we about this?”

From big data to useful information

Perhaps surprisingly, AI systems designed to derive causal relationships do not require “big data”. Instead they need useful information. The usefulness of the information depends on the question at hand, the choices we are faced with, and the value we place on the consequences of those choices.

To paraphrase the American statistician and writer Nate Silver: the amount of truth is approximately constant regardless of how much data we collect.

So what’s the solution? The process begins with the development of AI techniques that tell us what we really don’t know, rather than producing variations on what we already know.

Why? Because this helps us identify and collect the minimum amount of valuable information, in an order that allows us to disentangle cause and effect.

A robot on the moon

Such knowledge-building AI systems already exist.

As a simple example, imagine a robot being sent to the moon to answer the question, “What does the lunar surface look like?”

The robot’s designers may give it a prior “belief” about what it will find, along with an indication of how much “trust” it should have in that assumption. The level of trust is just as important as belief because it is a measure of what the robot does not know.
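
One simple way to express that pairing in code – an assumed representation for illustration, not a detail from the article – is a Gaussian whose mean is the belief and whose variance captures the lack of trust:

```python
# A minimal sketch of one assumed representation: the prior "belief" is the
# mean of a Gaussian, and "trust" is its precision (the inverse of the
# variance). A large variance says "I don't know much about this yet".
prior_belief = 0.0     # e.g. expected surface elevation, in metres (made up)
prior_variance = 4.0   # low trust: the robot is very unsure of that belief
trust = 1.0 / prior_variance

print(f"belief = {prior_belief}, trust = {trust}")
```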

The robot lands and is faced with a decision: which direction should it go?

Since the robot’s goal is to learn about the lunar surface as quickly as possible, it should move in the direction from which it expects to learn the most. This can be measured by how much the new knowledge reduces the robot’s uncertainty about the landscape – or how much it increases the robot’s confidence in what it knows.

The robot travels to the new location, uses its sensors to record observations, and updates its belief and the associated trust. In this way, it learns about the lunar surface in the most efficient way possible.
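
Here is a minimal sketch of that explore-observe-update loop, under some simplifying assumptions (a one-dimensional surface, an independent Gaussian belief for each cell, and invented sensor noise):

```python
# A minimal sketch of the explore-observe-update loop described above.
# Assumptions (for illustration only): the surface is a 1-D row of cells,
# the belief about each cell is an independent Gaussian, "learning best"
# means visiting the cell with the largest remaining uncertainty, and the
# true surface and sensor noise are invented.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 10
true_surface = rng.normal(0.0, 2.0, n_cells)   # hypothetical ground truth

belief = np.zeros(n_cells)         # prior mean for every cell
variance = np.full(n_cells, 4.0)   # prior variance = low trust
noise_var = 0.5                    # sensor noise variance

for step in range(5):
    # Move where uncertainty is highest -- the biggest expected gain.
    target = int(np.argmax(variance))
    observation = true_surface[target] + rng.normal(0.0, noise_var ** 0.5)

    # Conjugate (precision-weighted) Gaussian update of belief and trust.
    post_var = 1.0 / (1.0 / variance[target] + 1.0 / noise_var)
    belief[target] = post_var * (belief[target] / variance[target]
                                 + observation / noise_var)
    variance[target] = post_var

    print(f"step {step}: cell {target}, "
          f"belief {belief[target]:.2f}, variance {variance[target]:.2f}")
```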

Robotic systems like this – known as “active SLAM” (Active Simultaneous Localization and Mapping) – were first proposed more than 20 years ago, and they remain an active area of research. This approach of continuously gathering knowledge and updating understanding is based on a statistical technique called Bayesian optimization.
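
As a sketch of that idea – with an invented landscape, and using maximum predictive uncertainty as one simple acquisition rule – the loop looks like this:

```python
# A minimal sketch of the Bayesian-optimization idea behind this kind of
# exploration: fit a Gaussian process to what has been observed so far,
# then sample next wherever predictive uncertainty is highest. The
# "landscape" below is invented, and maximum uncertainty is just one
# simple choice of acquisition rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def landscape(x):
    return np.sin(3 * x) + 0.5 * x        # hypothetical unknown terrain

rng = np.random.default_rng(1)
x_seen = rng.uniform(0, 3, size=(3, 1))   # a few initial observations
y_seen = landscape(x_seen).ravel()

candidates = np.linspace(0, 3, 200).reshape(-1, 1)

for _ in range(5):
    gp = GaussianProcessRegressor().fit(x_seen, y_seen)
    _, std = gp.predict(candidates, return_std=True)

    # Acquisition step: query where the model is least certain.
    x_next = candidates[np.argmax(std)].reshape(1, 1)
    y_next = landscape(x_next).ravel()

    x_seen = np.vstack([x_seen, x_next])
    y_seen = np.concatenate([y_seen, y_next])

print("points sampled:", np.sort(x_seen.ravel()).round(2))
```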

Mapping unknown landscapes

A decision maker in government or in industry faces more complexity than the robot on the moon, but the mindset is the same. Their duties include exploring and mapping unknown social or economic landscapes.

Suppose we want to develop policies to encourage all children to succeed in school and graduate from high school. We need a conceptual map of which actions will help achieve these goals, when, and under what conditions.

Using the same principles as the robot, we formulate an initial question: “Which intervention(s) help children the most?”

Next we create a draft conceptual map using the existing knowledge. We also need a measure of our confidence in that knowledge.

We then develop a model that incorporates various sources of information. These do not come from robotic sensors, but from communities, lived experiences and any useful information from recorded data.

Then, based on an analysis that takes into account the preferences of the community and stakeholders, we make a decision: “Which actions should be implemented, and under what conditions?”

Finally, we discuss, learn, update beliefs, and repeat the process.
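
A toy version of this loop, with invented numbers and only two hypothetical interventions, might look like the following:

```python
# A toy version of the learn-update-repeat loop for the schooling question.
# Assumptions (for illustration): two hypothetical interventions, outcomes
# summarised as success/failure counts, and beliefs expressed as Beta
# distributions. Every number below is invented.

# Draft "conceptual map": prior pseudo-counts (successes, failures) encode
# both the initial belief and how confident we are in it.
priors = {"tutoring": (4, 2), "mentoring": (2, 2)}

# New information from communities, lived experience and recorded data,
# summarised as observed (successes, failures).
observations = {"tutoring": (30, 20), "mentoring": (35, 10)}

for name, (a, b) in priors.items():
    s, f = observations[name]
    a_post, b_post = a + s, b + f                  # Beta posterior
    mean = a_post / (a_post + b_post)              # expected success rate
    spread = (mean * (1 - mean) / (a_post + b_post + 1)) ** 0.5
    print(f"{name}: expected success rate {mean:.2f} (+/- {spread:.2f})")

# Decision step: favour the intervention with the higher expected benefit,
# then gather more data, update the beliefs, and repeat.
```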

Learn as we go

This is a “learning as we go” approach. As new information becomes available, new measures are selected to best satisfy pre-established criteria.

AI can be useful here in identifying the most valuable information, using algorithms that quantify what we don’t know. Automated systems can also gather and store that information at a pace, and in places, where it may be difficult for humans to do so.

AI systems like this apply what is known as the Bayesian decision-theoretic framework. Their models are explainable and transparent, and are built on explicit assumptions. They are mathematically rigorous and can offer guarantees.
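
As a sketch of what the decision step in such a framework can look like – with placeholder states, actions and utilities rather than anything from a real policy analysis:

```python
# A minimal sketch of the decision step in a Bayesian decision-theoretic
# framework: average each action's utility over the posterior belief about
# the unknown state, then pick the action with the highest expected
# utility. States, actions and utilities are placeholders.
import numpy as np

posterior = np.array([0.7, 0.3])   # belief: [intervention works, it doesn't]

# utility[action] = payoff in each state
utility = {
    "roll out widely": np.array([10.0, -4.0]),
    "run a pilot":     np.array([4.0, 1.0]),
    "do nothing":      np.array([0.0, 0.0]),
}

expected = {action: float(u @ posterior) for action, u in utility.items()}
best = max(expected, key=expected.get)

print(expected)
print("choose:", best)
```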

They are designed to assess causal relationships and help deliver the best intervention at the best time. And they integrate human values by being co-designed and implemented by affected communities.

We need to reform our laws and create new rules to govern the use of potentially dangerous AI systems. But it is just as important to first select the right tool for the task at hand.



Sally Cripps, Director of Technology, UTS Human Technology Institute, and Professor of Mathematics and Statistics, University of Technology Sydney; Alex Fischer, Honorary Fellow, Australian National University; Edward Santow, Professor and Co-Director, Human Technology Institute, University of Technology Sydney; Hadi Mohasel Afshar, Senior Research Scientist, University of Technology Sydney; and Nicholas Davis, Industry Professor of Emerging Technologies and Co-Director of the Human Technology Institute, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

