Saar Barhoom is the SVP of R&D at Veeva Crossix, a technology platform built for the development and delivery of large-scale patient data and analytics. He has joined CTech to share a review of “Life 3.0: Being Human in the Age of Artificial Intelligence” by Max Tegmark.
Title: “Life 3.0: Being Human in the Age of Artificial Intelligence”
Author: Max Tegmark
Format: Tablet
Where: Home
AI has recently come into the spotlight with ChatGPT and similar services, which offer the possibility of chatting freely with a bot and easier access to enormous bodies of knowledge. This sparks people’s imagination and creates plenty of buzz about the potential, which is indeed vast. But AGI – Artificial General Intelligence – is not just about chatting with a bot or building smart robots; it is much more than that.
This book, published back in 2017, unveils both the huge potential and the risks of Artificial General Intelligence, whose capabilities are expected to increase dramatically over the years. Unlike previous technological achievements, which were typically confined to specific domains, limited in capability, and well controlled by humans, AGI is relevant everywhere and is expected to be capable of improving itself – the essence of what the author calls “Life 3.0” – and a scenario in which it gets out of control is far from science fiction. The author argues that Life 3.0 is on its way, and once it arrives, things will evolve rapidly and nothing will be the same. Careful thinking is therefore required to agree on how this change should be handled, to ensure things evolve the way we want them to. It is remarkable that, more than six years after the book was published, we are already witnessing developments in AI that follow some of its predictions.
The author first explains the concept of the three phases of life: a phase in which learning was slow and capabilities improved only through DNA evolution passed on to the next generation (Life 1.0); then faster learning of skills and behaviors within the lifetime of an entity (Life 2.0); and finally Life 3.0, in which an entity can redesign itself and thereby extend its capabilities exponentially. The book then discusses intelligence – the capability to learn – which rests on two pillars: available memory and computation power. Both are fundamentally about information processing and are independent of the substrate used to implement them. This independence means there is immense room for improvement, as we have seen over the years with information technologies reaching ever higher peaks.
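The substrate-independence point can be made concrete. Here is a minimal sketch in Python (my own illustration; the book makes a similar argument using NAND gates): all Boolean logic – and, by extension, any computation – can be composed from a single primitive, so what matters is the pattern of information processing, not the material implementing it.

```python
# Substrate independence, illustrated: every Boolean function can be built
# from NAND alone, so the computation is defined by its logical structure,
# not by whether it runs on transistors, relays, or neurons.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

# The truth tables below would be identical on any physical substrate.
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", and_(a, b), or_(a, b))
```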
Next, the book turns to the near future: what can be achieved as near-term AI grows stronger – today driven mainly by deep reinforcement learning methods applied to specific fields – and what should we expect and consider?
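For readers unfamiliar with the term, here is a minimal toy sketch of the reinforcement-learning idea the book refers to (my own illustration, not code from the book; “deep” reinforcement learning replaces the value table below with a neural network).

```python
import random

# Toy tabular Q-learning: an agent learns by trial and error to walk right
# along a 5-cell corridor, earning a reward of 1 for reaching the last cell.
N, ACTIONS = 5, (0, 1)                  # action 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N)]      # Q[state][action] value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

def pick(s):
    # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
    if random.random() < epsilon or Q[s][0] == Q[s][1]:
        return random.choice(ACTIONS)
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(200):
    s = 0
    while s < N - 1:
        a = pick(s)
        s2 = max(0, s + (1 if a else -1))
        r = 1.0 if s2 == N - 1 else 0.0
        # Core update: nudge Q toward reward plus discounted best next value.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print("learned preference for stepping right, by state:",
      [round(q[1] - q[0], 2) for q in Q[:-1]])
```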
AI can drive fast progress in areas like healthcare. It can speed up drug development, accelerate the diagnosis of complicated cases, assist robot-aided surgery, and more. Of course, this requires access to large, high-quality, anonymized training data sets that preserve patient privacy; I am happy to note that our development center takes part in enabling access to such data sets.
In addition, AI can drive progress in transportation, manufacturing, energy, law, the military, and more, dramatically changing the employment market. With complete memory of the entire legal system, its regulations, and historical case law, it could become an efficient, tireless, and unbiased judicial authority. On the military side, it could become a massive deterrent, improve defense systems, or perhaps change warfare so that humans need not be involved at all.
A poor implementation, however, can be very hazardous. Throughout history, as part of normal technological progress, people have learned from their mistakes. But the cost of mistakes rises as technology grows more powerful and is used to control not only limited systems but also power grids, stock markets, and nuclear weapons. This means we need to change our approach, become proactive rather than reactive, and treat AI safety as a field in its own right.
In the longer term, the author explains what an intelligence explosion is and what its outcomes might be. Essentially, once AI becomes capable of redesigning itself, its rate of improvement becomes exponential, in the spirit of Moore’s law, and can very quickly get out of control. It would become superintelligent compared to humans and keep improving until it plateaus at a level constrained only by physical factors. At some point along that path, when AGI is more powerful than people, we might find ourselves in any of a variety of end states. In the longer run, if a superintelligence does reach its limits, the consequences might extend well beyond our planet. This part of the book is also fascinating but becomes more theoretical, and I found it less relevant to the current discussion.
The last part of the book discusses consciousness, proposing a broad definition of it as subjective experience and exploring the implications of AGI becoming conscious at some point. This is important because morality concerns itself with conscious creatures: if AGI is conscious, it should not be treated as a mere tool.
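To make the intelligence-explosion dynamic described above concrete, here is a toy model (my own illustration, not the author’s): capability grows in proportion to itself – self-improvement – until it saturates at an assumed physical ceiling.

```python
# Toy model of recursive self-improvement: growth proportional to current
# capability (the "explosion"), capped by a physical ceiling (the plateau).
def trajectory(c0=1.0, rate=0.5, ceiling=1e6, steps=80):
    c, path = c0, [c0]
    for _ in range(steps):
        c += rate * c * (1 - c / ceiling)   # logistic step: fast, then flat
        path.append(c)
    return path

path = trajectory()
for t in (0, 20, 40, 60, 80):
    print(f"step {t:>2}: capability ~ {path[t]:,.0f}")
```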
Several takeaways stand out. Replication of information and goal-driven behavior are the basic building blocks of life. Memory and computation are the basis of any intelligence and are inherently substrate-independent. There is a good chance we will reach a point where a machine initially built by humans could be considered alive and more intelligent than we are. We should consider whether a point at which AGI systems set their own goals is a desirable state. Furthermore, learning is built on memory and computation, and history shows we have been able to advance these capabilities dramatically by repeatedly reinventing how computers are built; one of the first uses of AGI is expected to be the redesign and improvement of AGI itself. The physical limit on the computation a piece of matter can perform is currently estimated at about 10^33 times today’s state of the art – a level we would reach in roughly two centuries if capabilities keep doubling every two years. An intelligence explosion, in which this process gets out of control and leaves us far behind superintelligent entities with no option to recover, is a real possibility. If we don’t think about what we want, we will probably get something we don’t want.
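The “roughly two centuries” figure is easy to verify with the 10^33 factor the book cites and the assumed doubling every two years:

```python
import math

# How long until a ~10^33-fold improvement if capability doubles every 2 years?
factor = 1e33
doublings = math.log2(factor)   # ~109.6 doublings needed
years = 2 * doublings           # one doubling per two years
print(f"{doublings:.0f} doublings -> about {years:.0f} years")  # ~219 years
```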
Ensuring that the goals of AGI systems are aligned with ours is not easy, since intelligent systems derive sub-goals that can be hard to anticipate. Some think there will come a point where AGI systems are conscious; when that happens, the argument goes, AI systems should be viewed as a natural evolution of life, and we should accept that even if it means the end of mankind.
Our laws are far behind this new world. As many jobs may become redundant, we should think about how to ensure people retain both a source of income and a sense of purpose.
Considering the speed at which AI technologies evolve, current standards for controlling them put humanity on an unsustainable course. Once real-world systems are connected to AI, we must ensure it is verified, validated, secured, and tightly controlled to avoid catastrophic results. A high-level conversation about the potential of AI and the regulation it requires is already underway, driven in part by the Future of Life Institute.
I enjoyed the book a lot: the possible futures it outlines are so different from what we have today, yet the book manages to explain them and present them as entirely feasible rather than science fiction. I was impressed that it anticipated some of the developments of the years since it was published. Although parts of it are theoretical and remote, it provokes the serious thinking and discussion of AI that is so important. The stakes are high enough to call for a broader understanding of the subject, rather than leaving the decisions to a handful of people while the general public remains mostly unaware of the possibilities.
Who should read this book:
Techies and science fiction lovers will obviously love this book, but I would recommend it first and foremost to decision-makers in the many areas where AGI is expected to be leveraged, as they would benefit from understanding the potential risks. That said, many others can benefit from reading it as well, since it opens the door to a better understanding of AGI and to joining the discussion with informed questions and a sense of the potential outcomes. Improving the public conversation on this important topic can hopefully have a positive impact on the end result.