New laws to regulate AI would be premature

Market experimentation has the highest return now, when we are debating the best and most appropriate uses for the technology.

Updated on: Oct 30 2023, 18:25 IST
The "AI Safety Summit" is advertised at Bletchley Park, near Milton Keynes, north of London on October 26, 2023. (AFP)

All of a sudden there is a flurry of activity around artificial intelligence policy. President Joe Biden is scheduled to issue an executive order on the topic today. An AI safety summit is being held in the UK later this week. And last week, the US Senate held a closed-door forum on research and development in AI.

I spoke at the Senate forum, convened by Majority Leader Chuck Schumer. Here's an outline of what I told the panel about how the US can boost progress in AI and improve its national security.

First, the US should admit many more high-skilled foreign workers, above all those in AI and related fields. As you might expect, many of the key contributors to AI progress — such as Geoffrey Hinton (British-Canadian) and Mira Murati (Albanian) — come from abroad. Perhaps the US will never be able to compete with China when it comes to assembling raw computing power, but many of the world's best and brightest would prefer to live in America. The government should make their path as easy as possible.

Artificial intelligence also means that science probably is going to move faster in the future. That applies not only to AI itself, but also to the sciences and practices that will benefit from it, such as computational biology and green energy. The US cannot afford the luxury of its current slow procurement and funding cycles. Biomedical science funding should be more like the nimble National Science Foundation and less like the bureaucratic National Institutes of Health. Better yet, DARPA-style models could be applied more broadly to give program managers greater authority to take risks with their grants.

Those changes would make it more likely that new and forthcoming AI tools will translate into better outcomes for ordinary Americans.

The US should also speed up permitting reform. Construction of more and better semiconductor plants is a priority, both for national security and for AI progress more generally, as recognized by the CHIPS Act. Yet the need for multiple layers of permits and environmental review slows down this process and raises costs. There is a general recognition that permitting reform is needed, but it hasn't happened.

As the rate of scientific progress increases, regulation may need to adapt. Many critics have charged that FDA approval processes are too slow and conservative. That problem could become much worse if the number of new candidate drugs were to increase by two or three times. It is unrealistic to expect the government to become as fast as the AIs, but it can certainly be faster than it is now.

What about the need for more regulation?

In the short run, the US can beef up, reform and reconsider what is sometimes called “modular regulation.” If an AI were to issue health or diagnostic advice, for example, it would be covered by current regulatory bodies — federal, state and local. At all levels, those institutions need to make significant changes. Sometimes that will involve more regulation and sometimes less, but now is the time to start those reappraisals.

What if an AI gives diagnostic advice that is better than that of human doctors — but is still not perfect? Should the AI company be subject to medical malpractice law? I would prefer a “user beware” approach, as currently exists for googling medical advice. But obviously this issue requires deeper consideration. The same concern applies to AI legal advice: Plenty of current laws apply, but they need to be revised to match new technologies.

The US should not, at the moment, regulate or license AI services as entities unto themselves. Obviously current AI services fall under extant laws, including laws against violence and fraud.

Over time, I am confident that people will figure out what exactly AIs, including large language models, are best used for. Industry structure may become relatively stable, and risks will be better known. It will be clear whether the American AI service providers have kept their leads over China's.

At that point — but not until then — the US might consider more general regulations for AI. Market experimentation has the highest return now, when we are debating the best and most appropriate use cases for AI. It is unrealistic to expect bureaucrats, few of whom have any AI expertise, to figure out answers to these questions.

In the meantime, it does not work to license AIs on the condition that they prove they will not cause harm, or are very unlikely to. The technology is very general, its future uses are hard to predict, and some harms could be the fault of the users, not the company behind the service. Likewise, it would not have been wise to make such demands of the printing press, or of automation, in their early days. And licensing regimes have an unfortunate tendency to devolve into bureaucratic or political squabbling.

In any case: The time to act is now. The US needs to get on with it.


First Published Date: 30 Oct, 18:25 IST