We may be in it?

Introduction

This post is about the paper MoSSAIC: AI Safety After Mechanism, where MoSSAIC stands for "Management of substrate-sensitive AI capabilities" or "Management of AI risks in a substrate-sensitive manner."

Nevertheless, this is neither a summary nor a review. As I have written before, such exercises, though interesting, are of limited value from a learning perspective.

The goal of this post is to draw attention to current AI developments and, through them, to the potential of frontier AI systems.

Core of My Argument and the Current AI Landscape

I felt compelled to write this because of research papers I have recently gone through. One of the main points of research is not only to do Science but to think about the Science we do. I have seen a number of top-tier papers make claims about the current state of AI systems in a rather careless manner or, should I say, without stepping back from their own research.

The matter I am referring to is, in substance, the following: researchers perform some (great) research and then say, well, you know, these systems aren't that good because of this or that. The crucial point here is that we are currently benchmarking these AI systems against the greatest minds of humankind. This lack of perspective, or short memory, is of utmost concern: if you cannot correctly, or at least moderately, assess the capabilities of current AI systems, how would you be able to assess and weigh their impact on human societies? For a quick reminder of where we come from, I refer you to the YouTube link below.

Echoing the previous point, my reading is that the MoSSAIC paper captures some of the current reality. Mainly, it argues that the depicted threat model, in which the main issue is the flexibility of intelligent systems with respect to their substrates, could be mitigated by keeping pace with intelligent systems and deploying more substrate-flexible solutions earlier.

For this to take place, the paper assumes that we are in the "middle period": roughly speaking, a period in which AI has become a commodity akin to water or electricity. From data centers, servers, and personal computers to edge computing, IoT, sensors, and so on, some of the substrates the paper talks about are already "being filled." A possible example is the so-called intelligent charging plug.

Another assumption the paper makes concerns the wide availability of mathematical copilots/agents that everyone could have. This point is illustrated by the increase in math- and formal-methods-based initiatives, challenges, and competitions.

In closing: when you are on a high-speed train talking with the person next to you, you may not "see it," but you are actually moving very fast. It helps to look outside, or to look from above (for those who can). Bottom line: we may very well be in it.

Some Technical Elements

Here are some additional interesting aspects:

  • The paper also mentions the potential limitations of mechanistic interpretability (MI), which I agree with. Although MI is not necessary for AI safety, it is a very interesting tool to have.
  • The live theory they develop argues that there is a paradigm shift from the formal theories we operate with to "informal theory-prompts." To this point, and in conjunction with the assumption of mathematical agents, I would like to add the following: we may observe (if it is not already the case) a new form of "folk theorems," that is, a new corpus of knowledge that is machine-induced and linked to the mechanization of mathematics (a minimal sketch of what such a mechanized artifact looks like follows this list). There is tangential research on this question, e.g., whether natural language is the only option.
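
To make the "mechanization of mathematics" point concrete, here is a minimal sketch of what a machine-checked statement looks like, written in Lean 4 with no external libraries. The theorem and proof are elementary examples of my own choosing, not taken from the paper; the point is only that the resulting artifact is verified by a machine rather than argued in prose.

```lean
-- A minimal machine-checked statement: commutativity of addition on Nat.
-- Lean's core library already provides Nat.add_comm; the proof is redone
-- here by induction purely for illustration.
theorem my_add_comm (m n : Nat) : m + n = n + m := by
  induction n with
  | zero =>
    -- Base case: m + 0 = 0 + m
    rw [Nat.add_zero, Nat.zero_add]
  | succ k ih =>
    -- Inductive step: m + (k + 1) = (k + 1) + m, using ih : m + k = k + m
    rw [Nat.add_succ, Nat.succ_add, ih]
```

A machine-induced "folk theorem," in the sense above, would then be one entry in a large corpus of such verified statements, generated and checked by AI agents rather than written by hand.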

Conclusion

The MoSSAIC paper captures a number of different perspectives on AI safety, so much so that I am tempted to say it offers an almost "holistic approach" to the subject.

The main point I wanted to highlight is the lack of awareness of current AI developments.

If you care about AI safety, consider supporting this inquiry.

Links

  • What Did Ilya See?
  • On Folk Theorems