aiHarmless

Artificial Intelligence (AI)

Version 5.75 (9 November, 2 pm; previously Version 5.0, Friday, September 29, 11:45 am)

This document addresses Artificial Intelligence (AI) concerns, including potentially supporting US Federal legislation that advocates artificial intelligence guardrails to protect humanity globally.

Elon Musk’s point of view is considered.

I might submit written comments to the United States Federal Government, as well as to other countries and groups. Additionally, I might testify in person at public hearings on Capitol Hill; in the past I have testified as an expert witness, attended public hearings on Capitol Hill in Washington, DC, and submitted written comments.

It is important to consider influencing US Federal legislation on artificial intelligence. The first step normally is, and here could be, voluntary compliance.

Philanthropists are wanted and needed; their participation would help make the world better.

More to come… a complicated subject.

The niche focus relates to AI learning (Tesla Bots, Optimus, video learning with AI) and Artificial General Intelligence (AGI).

Ways to increase productivity.

AI polytechnic.

Polytechnic learning to support re-careering.

Potential minimum-earnings programs.

Balancing the US Federal Budget with productivity increases from Artificial Intelligence (AI).

Military branches and law enforcement

This offering is not intended for military branches or law enforcement agencies. Coaching is provided; individuals are fully supported.

Disclaimer

Information is provided on a best-effort basis and is not to be relied upon for decision-making. All information is subject to change without notice.

Forward-looking statements

The information provided contains a significant number of forward-looking statements that may or may not happen or be offered in the future. This information is not to be relied upon as factual occurrences.

Open source, non-proprietary

All information is open source and non-proprietary; it can be used by anyone for useful purposes at any time.

Research:

TECH LEADERS AND CONGRESS DISCUSS AI REGULATION: WHAT HAPPENED?

September 18, 2023

by Glory Kaburu


Tech leaders and Congress unite on AI regulation, seeking the right balance between innovation and safety.

Bipartisan concerns raised over the closed-door format, as senators push for AI legislation.

Elon Musk calls for an AI “referee” to ensure AI actions are safe and in the public interest.

In a rare gathering of global technology leaders, some of the brightest minds in the tech industry convened in Washington, D.C., to discuss the future of artificial intelligence (AI) in the United States. This meeting, as reported by The New York Times on September 13, 2023, featured prominent figures like Elon Musk, Mark Zuckerberg, and Sam Altman, who engaged in public and private discussions with members of Congress. Unlike the usual antitrust hearings or investigations into data breaches, this meeting sought to explore the complex questions surrounding the regulation of AI.

The distinguished attendees

CNBC reported that the meeting saw the participation of top tech executives, including:

– Sam Altman, CEO of OpenAI

– Bill Gates, former CEO of Microsoft

– Jensen Huang, CEO of Nvidia

– Alex Karp, CEO of Palantir

– Arvind Krishna, CEO of IBM

– Elon Musk, CEO of Tesla and SpaceX

– Satya Nadella, CEO of Microsoft

– Sundar Pichai, CEO of Alphabet and Google

– Eric Schmidt, former CEO of Google

– Mark Zuckerberg, CEO of Meta

The closed-door meeting was attended by more than 60 senators, offering an environment conducive to open discussions without the usual constraints of a public hearing.

Key areas of discussion:

Sundar Pichai, CEO of Google, outlined four critical areas where Congress could play a pivotal role in AI development, according to his prepared remarks:

Supporting innovation:

 Crafting policies that foster innovation, including investments in research and development and immigration laws that attract talented AI professionals to the United States.

Government use of AI: 

Promoting the adoption of AI within government agencies to enhance efficiency and effectiveness.

Addressing significant challenges:

Applying AI solutions to tackle pressing issues such as cancer detection and other major societal problems.

Workforce transition: 

Advancing an agenda for workforce transition that benefits all individuals, ensuring that AI-driven advancements do not leave anyone behind.

Bipartisan concerns

However, not all senators were in favor of this meeting format. Connecticut Democratic Senator Richard Blumenthal and Missouri Republican Senator Josh Hawley criticized the closed-door approach, expressing doubts about its effectiveness in addressing the societal risks associated with AI.

 They had recently introduced a legislative framework for AI regulation that includes the creation of an independent AI oversight body, a licensing regime for AI development, and the ability for individuals to sue companies over AI-related harms. They were adamant about moving forward with their proposed framework and potentially drafting a bill by the year’s end.

Blumenthal emphasized the need for AI safety regulation akin to the regulations governing airline safety, car safety, drug safety, and medical device safety. He argued that AI safety was equally important, if not more so, due to its potential impact.

A thoughtful conversation

New Jersey Democratic Senator Cory Booker described the discussions as a “thoughtful conversation.” 

He emphasized that all panel members believed in the government’s regulatory role in AI. Finding the right regulatory role was identified as a challenging task, crucial to safeguarding the nation and humanity from the risks posed by AI.

A call for a referee

Elon Musk, the CEO of Tesla, called for the appointment of a U.S. “referee” for artificial intelligence. 

He, along with Mark Zuckerberg and Sundar Pichai, met with lawmakers behind closed doors at Capitol Hill to discuss AI regulation. Musk likened the need for a regulator to the role of referees in sports, stating that such an entity would ensure that companies take actions that are safe and in the interest of the general public.

 Musk considered this meeting a “service to humanity” and suggested it might be a historically significant step in shaping the future of civilization.

Learning from China’s approach

While the United States grapples with the complexities of AI regulation, it’s worth noting that China has been proactive in enacting AI regulations over the past two years. These regulations, though differing in ideological content, offer valuable lessons in structuring AI governance. China has adopted a targeted and iterative approach, focusing on specific AI applications and gradually introducing regulations to address concerns. This approach allows for the development of policy tools and regulatory expertise over time.

Notes:

Artificial Intelligence (AI) protection program 

The aiHarmless.org domain (URL) was acquired, and the aiHarmless.com extension was secured along with it.

Planning to use the .org extension, because aiHarmless is planned to operate as a nonprofit organization (.org). The plan is to automatically forward online .com inquiries to the .org website.

www.aiharmless.org coming soon

Under-construction websites are independently coming soon. In the interim, this content, like all other content, is publicly accessible; everything is open source and non-proprietary.

Planning, with many others, to determine and implement ways to protect humanity from the negative aspects of Artificial Intelligence (AI).

US Federal legislation is expected to provide guardrails for Artificial Intelligence. Planning active involvement in everything AI. Niche AI focuses: transportation and medicines.

Additionally, the evCharity domain supports electric vehicles (EVs) for the disadvantaged. The nonprofit organization plans to provide mobility to individuals for many beneficial purposes: transportation of people and products to benefit mankind globally.

www.evcharity.org coming soon 

Philanthropists are wanted and needed to help make a better world and protect humanity from the potential negative aspects of artificial intelligence.

Participation encouraged… join us… help make a better world.

National Artificial Intelligence Initiative Office:

Located in the White House Office of Science and Technology Policy (OSTP), the National Artificial Intelligence Initiative Office (NAIIO) is legislated by the National Artificial Intelligence Initiative Act (DIVISION E, TITLE LI, SEC. 5102) to coordinate and support the National AI Initiative (NAII). The Director of the NAIIO is appointed by the Director of OSTP. The NAIIO is tasked to:

Provide technical and administrative support to the Select Committee on AI (the senior interagency committee that oversees the NAII) and the National AI Initiative Advisory Committee;

Oversee interagency coordination of the NAII;

Serve as the central point of contact for technical and programmatic information exchange on activities related to the AI Initiative across Federal departments and agencies, industry, academia, nonprofit organizations, professional societies, State and tribal governments, and others;

Conduct regular public outreach to diverse stakeholders; and

Promote access to technologies, innovations, best practices, and expertise derived from Initiative activities to agency missions and systems across the Federal government.

The NAIIO staff include employees on detail assignments from across the government.

Background

Article on Elon Musk and AI (Time Magazine, adapted from the book):

At a conference in 2012, Elon Musk met Demis Hassabis, the video-game designer and artificial-intelligence researcher who had co-founded a company named DeepMind that sought to design computers that could learn how to think like humans.

“Elon and I hit it off right away, and I went to visit him at his rocket factory,” Hassabis says. While sitting in the canteen overlooking the assembly lines, Musk explained that his reason for building rockets that could go to Mars was that it might be a way to preserve human consciousness in the event of a world war, asteroid strike, or civilization collapse.

Hassabis told him to add another potential threat to the list: artificial intelligence. Machines could become superintelligent and surpass us mere mortals, perhaps even decide to dispose of us.

*********

At Musk’s 2013 birthday party in Napa Valley, California, they got into a passionate debate. Unless we built in safeguards, Musk argued, artificial-intelligence systems might replace humans, making our species irrelevant or even extinct.

Page pushed back. Why would it matter, he asked, if machines someday surpassed humans in intelligence, even consciousness? It would simply be the next stage of evolution.

Human consciousness, Musk retorted, was a precious flicker of light in the universe, and we should not let it be extinguished. Page considered that sentimental nonsense. If consciousness could be replicated in a machine, why would that not be just as valuable? He accused Musk of being a “specist,” someone who was biased in favor of their own species. “Well, yes, I am pro-human,” Musk responded. “I f-cking like humanity, dude.”

The effort failed, and Google’s acquisition of DeepMind was announced in January 2014. Page initially agreed to create a “safety council,” with Musk as a member. The first and only meeting was held at SpaceX. Page, Hassabis, and Google chair Eric Schmidt attended, along with Reid Hoffman and a few others. Musk concluded that the council was basically bullsh-t.

So Musk began hosting his own series of dinner discussions on ways to counter Google and promote AI safety. He even reached out to President Obama, who agreed to a one-on-one meeting in May 2015. Musk explained the risk and suggested that it be regulated. “Obama got it,” Musk says. “But I realized that it was not going to rise to the level of something that he would do anything about.”

Musk then turned to Sam Altman, a tightly bundled software entrepreneur, sports-car enthusiast, and survivalist who, behind his polished veneer, had a Musk-like intensity. At a small dinner in Palo Alto, they decided to co-found a nonprofit artificial-intelligence research lab, which they named OpenAI. It would make its software open-source and try to counter Google’s growing dominance of the field. “We wanted to have something like a Linux version of AI that was not controlled by any one person or corporation,” Musk says.

One question they discussed at dinner was what would be safer: a small number of AI systems that were controlled by big corporations or a large number of independent systems? They concluded that a large number of competing systems, providing checks and balances on one another, was better. For Musk, this was the reason to make OpenAI truly open, so that lots of people could build systems based on its source code. 

Another way to assure AI safety, Musk felt, was to tie the bots closely to humans. They should be an extension of the will of individuals, rather than systems that could go rogue and develop their own goals and intentions. That would become one of the rationales for Neuralink, the company he would found to create chips that could connect human brains directly to computers.

????? Section missing ????

Musk’s determination to develop artificial-intelligence capabilities at his own companies caused a break with OpenAI in 2018. He tried to convince Altman that OpenAI should be folded into Tesla. The OpenAI team rejected that idea, and Altman stepped in as president of the lab, starting a for-profit arm that was able to raise equity funding, including a major investment from Microsoft.

So Musk decided to forge ahead with building rival AI teams to work on an array of related projects. These included Neuralink, which aims to plant microchips in human brains; Optimus, a human-like robot; and Dojo, a supercomputer that can use millions of videos to train an artificial neural network to simulate a human brain. It also spurred him to become obsessed with pushing to make Tesla cars self-driving. 

At first these endeavors were rather independent, but eventually Musk would tie them all together, along with a new company he founded called xAI, to pursue the goal of artificial general intelligence.

In March 2023, OpenAI released GPT-4 to the public. Google then released a rival chatbot named Bard. The stage was thus set for a competition between OpenAI-Microsoft and DeepMind-Google to create products that could chat with humans in a natural way.

???? Missing ???? 

His compulsion to ride to the rescue kicked in. He was resentful that he had founded and funded OpenAI but was now left out of the fray. AI was the biggest storm brewing. And there was no one more attracted to a storm than Musk.

In February 2023, he invited—perhaps a better word is summoned—Sam Altman to meet with him at Twitter and asked him to bring the founding documents for OpenAI. Musk challenged him to justify how he could legally transform a nonprofit funded by donations into a for-profit that could make millions. Altman tried to show that it was all legitimate, and he insisted that he personally was not a shareholder or cashing in. He also offered Musk shares in the new company, which Musk declined.

Instead, Musk unleashed a barrage of attacks on OpenAI. Altman was pained. Unlike Musk, he is sensitive and nonconfrontational. He felt that Musk had not drilled down enough into the complexity of the issue of AI safety. However, he did feel that Musk’s criticisms came from a sincere concern. “He’s a jerk,” Altman told Kara Swisher. “He has a style that is not a style that I’d want to have for myself. But I think he does really care, and he is feeling very stressed about what the future’s going to look like for humanity.”

The fuel for AI is data. The new chatbots were being trained on massive amounts of information, such as billions of pages on the internet and other documents. Google and Microsoft, with their search engines and cloud services and access to emails, had huge gushers of data to help train these systems.

What could Musk bring to the party? One asset was the Twitter feed, which included more than a trillion tweets posted over the years, 500 million added each day. It was humanity’s hive mind, the world’s most timely dataset of real-life human conversations, news, interests, trends, arguments, and lingo. Plus it was a great training ground for a chatbot to test how real humans react to its responses. The value of this data feed was not something Musk considered when buying Twitter. “It was a side benefit, actually, that I realized only after the purchase,” he says.

Twitter had rather loosely permitted other companies to make use of this data stream. In January 2023, Musk convened a series of late-night meetings in his Twitter conference room to work out ways to charge for it. “It’s a monetization opportunity,” he told the engineers. It was also a way to restrict Google and Microsoft from using this data to improve their AI chatbots. He ignited a controversy in July when he decided to temporarily restrict the number of tweets a viewer could see per day; the goal was to prevent Google and Microsoft from “scraping” up millions of tweets to use as data to train their AI systems. 


There was another data trove that Musk had: the 160 billion frames per day of video that Tesla received and processed from the cameras on its cars. This data was different from the text-based documents that informed chatbots. It was video data of humans navigating in real-world situations. It could help create AI for physical robots, not just text-generating chatbots.

The third goal that Musk gave the team was even grander. His overriding mission had always been to assure that AI developed in a way that helped guarantee that human consciousness endured. That was best achieved, he thought, by creating a form of artificial general intelligence that could “reason” and “think” and pursue “truth” as its guiding principle. You should be able to give it big tasks, like “Build a better rocket engine.”

Someday, Musk hoped, it would be able to take on even grander and more existential questions. It would be “a maximum truth-seeking AI. It would care about understanding the universe, and that would probably lead it to want to preserve humanity, because we are an interesting part of the universe.” That sounded vaguely familiar, and then I realized why.

He was embarking on a mission similar to the one chronicled in the formative (perhaps too formative?) bible of his childhood years, the one that pulled him out of his adolescent existential depression, The Hitchhiker’s Guide to the Galaxy, which featured a super-computer designed to figure out “the Answer to The Ultimate Question of Life, the Universe, and Everything.”

Isaacson, former editor of TIME, is a professor of history at Tulane and the author of numerous acclaimed biographies. 

Reportedly adapted from the book Elon Musk by Walter Isaacson, published by Simon & Schuster Inc.


Can Elon Musk really save the world?
Fortune Magazine, by Peter Vanham

Earth faces an imminent threat, and unless we change our course, human life as we know it will become impossible here. What do we do?

Most would say we must do all we can to prevent that bleak scenario by limiting the harmful impact of human activities. “There is no planet B,” environmentalists point out.

Ask the billionaire Elon Musk, however, and you may get a very different reply. “We don’t want to be one of those single-planet species,” Musk said in 2021 at the launch of a SpaceX rocket into orbit. “We want to be a multi-planet species.” Musk added that he is “highly confident” that SpaceX will land humans on Mars in the near future.

It’s a mindset akin to that of the Krypton scientist Jor-El, father of DC Comics’ Superman: As your planet explodes, send your offspring away in a space pod to ensure the survival of your species elsewhere in the galaxy. And indeed, Musk doesn’t seem to have let go of his childhood fascination with dashing, planet-hopping superheroes.

Growing up in Pretoria, South Africa, “I read all the comics I could find, or that they let me read in the bookstore before chasing me away,” Musk has said in a documentary. In those tales, superheroes vanquished evil on Earth and beyond, and humans and other species faced off in faraway galaxies.

The Musk we know today—the bombastic entrepreneur, admired and abhorred around the world—has styled himself as the embodiment of these fictional characters: a man who believes in his ability to singlehandedly save humanity. His superhero philosophy puts him at odds with environmentalists, who believe in the power of the collective, rather than that of any individual “great man,” to solve humanity’s greatest challenges.

And it’s one clue to unpacking a fundamental conundrum about Elon Musk. Why is a person whose technology for electric vehicles, solar energy, and energy storage has done so much to advance the green transition so reviled in the environmentalist community? Why do climate activists abandon his social media platform and mock him at their protests? How green is Elon Musk, really? The answer requires a journey across time and into space.

The year of Musk’s birth, 1971, was a turning point for humanity. It was the first year humans lived beyond the planet’s biocapacity, scientists at the Global Footprint Network calculated decades later. In the U.S., awareness was rising about smog, acid rain, and the contamination of water supplies; in 1970 the first Earth Day was held, the Environmental Protection Agency was created, and the Clean Air Act was passed.

But environmental concerns wouldn’t have ranked high on the young Musk’s radar. As a boy, as Walter Isaacson recounted in his recent biography, Musk was sent by his father to several “survival camps” where he had to literally fight to get food, once losing 10 pounds. On another occasion, he was beaten so badly by bullies at school that he was hospitalized.

The boy Elon found an escape, and a source of inspiration, in the moral certainty of his favorite superheroes, he later said in an interview: “I mean, they’re always trying to save the world, with their underpants on the outside or these skintight iron suits.” He looked to science fiction for philosophical guidance as well. One of his favorite books growing up, he told Isaacson and others, was The Hitchhiker’s Guide to the Galaxy, published when he was eight. The satirical story follows the only man to survive the destruction of planet Earth and his travels in outer space. Among Musk’s takeaways from the book, he said in a CBS interview, is that “the universe is the answer.” (Emails to Tesla and to Musk seeking comment for this story were not returned.)

Confined for now to planet Earth, Musk was fascinated by the New World, and more specifically, the United States. It was the land his superheroes hailed from, and of the unbridled, free-spirited capitalism of President Ronald Reagan and the economist Milton Friedman. (Many years later, Musk would invoke Friedman in calling a Biden administration spending bill “trickery,” and echo Reagan, telling the Wall Street Journal that “government should, I think, just try to get out of the way and not impede progress.”)

Musk ejected himself from the turmoil of apartheid-era South Africa at 17 to enroll at Queen’s University in Canada, where his mother’s relatives lived. He has said he made this decision in part to avoid compulsory service in the South African military. “Who wants to serve in a fascist army?” he said. As soon as he could, he transferred to the U.S. college system, pursuing degrees at the University of Pennsylvania and Wharton, and ending his academic career at Stanford, where he was admitted to a doctoral program but dropped out to pursue entrepreneurship.

Musk never lost his superhero mindset. As Sam Altman, the CEO of OpenAI, told Ronan Farrow in a recent New Yorker profile: “Elon desperately wants the world to be saved. But only if he can be the one to save it.”

Looking at his track record of developing technology for electricity and renewable energy, there’s certainly a case to be made that Musk has not just proved himself to be green; he’s super-green.

“Elon Musk will go down in the history books as having helped the transportation sector from fossil fuel to zero-emissions electrification,” said Margo Oge, a former director of the EPA’s Office of Transportation and Air Quality, who has at times been critical of Musk, in an interview with Fortune. “We would not find ourselves where we are today, with trillions of dollars invested in decarbonizing transport, if it wasn’t for him.”

Yet the energy economy is far from transformed. Despite Musk’s contributions, CO2 emissions globally are still rising at alarming rates. The exploitation and burning of fossil fuels continues apace. Temperatures have already risen about one degree Celsius above 20th-century averages and continue to rise. Unprecedented storms wreak havoc, and forests around the world burn.

Sure, Musk has boldly gone where no one has gone before, and several times he has snatched victory from the jaws of defeat. But that’s where the comparisons to Superman and Iron Man end. Superheroes can bend reality, time-travel, and use magic to transcend what’s physically possible. Elon Musk can’t.

Perhaps it is us, as a society, who need to let go of the fantasy of any one superhero or technological innovation changing the status quo on climate change. To address the crisis humanity faces, technological innovations are necessary, but not sufficient. Individual and collective choices, government policies, and international agreements make up the other pieces of the puzzle.

Individual humans have flaws, no matter how innovative or visionary they are. Musk is no exception to that rule. And even if he were a flawless superhero, we would need many more than one Superman to make a green transition happen. We’d need a superhero universe.

Or better still, an entire species, committed to that goal.

This article appears in the October/November 2023 issue of Fortune with the headline, “How green is Elon Musk, really?”
