HUMAN HYBRID CONSCIOUSNESS - POPULAR MECHANICS 11 JULY 2024

 

 

John Storm's BCI is more discreet, being a digital biological implant: BioCore™

 

 

 

In this series of fictional adventures, John Storm is enhanced by virtue of a digital interface affixed to his brain in the course of the Cleopatra Reborn thriller. The interface is called BioCore™. It allows the ocean conservationist to communicate wirelessly with an extremely powerful portable supercomputer built on nanotechnology, called the CyberCore Genetica™.

 

This allows John's conscious brain to utilize the computing power and AI of Hal, and any other communication device, so enhancing his abilities, while also allowing the Artificial Intelligence to learn from Commander Storm, much as chatbots learn from the web. In this way robots can learn that it is wrong to kill people, rob them, tell lies to get elected, deny an electorate the truth through propaganda and censorship, or disregard basic human rights, such as the right to receive and impart information free of state interference.

 

We imagine that this might be the death knell for corrupt politicians, who rely on disinformation to get elected, making promises that cannot possibly be fulfilled about economic growth on a planet we are already burning up, consuming natural resources at 2.4 times the sustainable rate. The truth is revealed as never-ending borrowing and increased debt to cover up inefficient administrations, or as unlawful aggression to fuel military growth, more nuclear missiles, warships and tanks, despite the unsustainability of such notions.

 

An AI-enhanced brain, even that of a politician programmed to lie from the moment they enter the public arena, could be bound to reveal the truth, even if it is tailored to becoming a senator or president - except with a higher level of morality built in.

 

 

 

 

 

 

He looks just like any other soldier in action, but Commander John Storm is cybernetically enhanced with the BioCore™ BCI, a highly advanced computer communication device that allows him to control anything, anywhere, that is controlled by computers: Alexa, most traffic systems (Die Hard 4), and so on. Worst of all for rogue nations, that includes military networks - and the more advanced they are, the more vulnerable. Fortunately, John is a moral man with a code of conduct well above most civil and military regimes. Hence, if you are the good guys, you are safe. If you step out of line, then watch out - criminals, terrorists and warmongers. And he does not need a gun.

 

 

 

 

POPULAR MECHANICS 11 JULY 2024 - HUMANS COULD FORGE A HYBRID CONSCIOUSNESS BY MERGING WITH ARTIFICIAL LIFE, OXFORD SCIENTISTS SAY

At the turn of the 8th century B.C.E., ancient Greek poet Hesiod wrote a curious tale about a strange robot named Talos. Forged by the godly blacksmith Hephaestus and infused with ichor - the same mysterious life force that flowed through the veins of the gods themselves - Talos marked the first-ever description of an artificial lifeform, and the beginning of humanity’s long obsession with artificial consciousness.

Fast-forward 2,700 years, and like a modern-day Hephaestus, companies like Microsoft, OpenAI, and Anthropic are imbuing their own artificial creations with the ichor of human creativity, reasoning, and data … lots and lots of data. Although Hesiod’s 100-foot-tall, brass-clad creation only plays a minor role in the supernatural soap opera of Greek mythology, AI researchers are beginning to use words like “synthesis,” “merger,” or even “evolution” to describe humanity’s future relationship with artificial life.

Suddenly, a hybrid consciousness that felt so decidedly sci-fi (or ancient Greek depending on your point of view) now seems inevitable. While the future holds many permutations of this merger of consciousness, humanity appears to have reached a momentous fork in the road: will the rise of a hybrid consciousness bring about a technological utopia or a Terminator-style apocalypse?

Oxford philosopher Nick Bostrom, Ph.D., has spent more than a decade pondering both of these possibilities, and while the future is uncertain, Bostrom says that some sort of human-machine hybrid consciousness is likely inevitable.

“It would be sad to me if like, in a million years, we still have the current version of humanity … at some point you may want to upgrade, then you can imagine uploading or biologically enhancing yourself or all kinds of things,” he says.

But such an “upgrade” can come with some serious, potentially world-ending consequences, a series of apocalyptic futures Bostrom previously explored in his 2014 book Superintelligence. Yet his latest book, Deep Utopia, argues the other side. It contends that a hybrid existence could create a “solved world,” one free of the everyday drudgery that fills our lives today.

Whether humanity is fighting against Skynet or exploring the galaxy in a kind of Star Trek-ian paradise still boils down to one question: Will AI ever become conscious?

THE DEFINITION OF CONSCIOUSNESS is a notoriously slippery concept that philosophers, scientists—and now AI engineers—have debated for centuries. Alan Turing’s famous test measured the intelligence of a system, or at least its ability to trick humans into thinking it was intelligent. However, experts have since argued that such a test only really examines a very small piece of the consciousness puzzle.

More complicated hypotheses, such as Global Workspace Theory, Integrated Information Theory, High Order Representation Theory, Attention Schema Theory, and others all point to certain ways someone or something could be regarded as conscious. Bostrom argues that consciousness isn’t a black-and-white matter, like flipping a switch. Instead, the process of gaining consciousness is a gradual and oftentimes murky journey whose progress is difficult to ascertain.

“We don’t have very clear criteria for what makes a computational system conscious or not,” Bostrom says. “If you just take off the shelf the best theories we have on consciousness … it’s not a ridiculous idea by any means that there could be some forms of consciousness now or in the very near term emerging from these [AI] systems.”

Until recently most AI programmers didn’t really wrestle with these deep philosophical theories - they just wanted to make sure their large language models (LLMs) weren’t accidentally racist. But in the past couple years, engineers from both Google and OpenAI have questioned - controversially - whether these programs are actually conscious.

Bostrom argues that the advent of AI that convincingly speaks like a human, including platforms like Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini, is a big component fueling current consciousness claims. After all, when you speak to another person, you assume they’re conscious - and convincing digital minds appear to trigger a similar reaction.

This kind of makeshift consciousness has been the object of some obsession for Oxford’s Marcus du Sautoy, Ph.D. As the university’s professor for the public understanding of science, Sautoy has given talks and even written a book exploring the idea that AI could possibly be creative. However, it’s Sautoy’s background in mathematics that has centered his understanding of hybrid consciousness today.

“AI is just code, code is just algorithms, and algorithms are just math, so we are creating something mathematical in nature,” Sautoy says. “People get very mystical about what the brain does … but [the brain] also goes through some sort of algorithmic process.” The brain and artificial algorithms (a.k.a. a set of rules and calculations) are similar in many ways - both create types of synaptic connections, for example - but the human brain still remains superior when creating new synaptic connections. In other words, it’s better at “learning.”
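To make the "algorithmic" point concrete, here is a deliberately tiny, hypothetical illustration: a single Hebbian-style weight update of the kind used in simple artificial neural networks, in which a connection is strengthened when two units are active together. It only illustrates the general idea that learning can be written down as a procedure; it is not a model of how the brain, or any production AI system, actually learns.

# A minimal, illustrative Hebbian-style update: the "connection" (weight) between
# two units is strengthened when they are active together. This is a toy example
# of an algorithmic learning rule, not a claim about real neuroscience.

learning_rate = 0.1
weight = 0.0                      # strength of the connection between unit A and unit B

activity_pairs = [(1.0, 1.0), (1.0, 0.0), (1.0, 1.0)]  # (activity of A, activity of B)

for a, b in activity_pairs:
    weight += learning_rate * a * b   # "cells that fire together, wire together"

print(f"Final connection strength: {weight:.2f}")   # 0.20 after two co-activations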

If digital minds were to gain some degree of consciousness, even in a rudimentary rat-in-a-science-experiment sort of way, we would likely owe the machines some sort of legal protection and moral courtesy. Bostrom says, for example, that engineers could direct an artificial consciousness’ overall personality, including taking into account its well-being. This “state-of-mind” programming would ensure that the digital mind feels happy and eager to take on the day’s tasks.

WHILE POPULAR BOOKS AND FILMS have explored the myriad ways the rise of a digital consciousness could end us all (an outcome Bostrom says is still very much on the table), it’s also possible that a human-digital hybrid leads the way to a utopian future. In 2017, OpenAI CEO Sam Altman described this idea as “The Merge,” a future in which humanity peacefully co-exists with its digital creation. Compared to other disastrous outcomes—Bostrom posited a future in which AI accidentally transforms the world into paper clips—the idea of a Merge is “probably our best case scenario,” Altman wrote.

Billionaires like Elon Musk have taken this idea of The Merge quite literally, and invested billions to form companies like Neuralink, whose aim is to physically connect biological components with mechanical ones. Neuralink’s first human clinical trial, known as the Precise Robotically Implanted Brain-Computer Interface (PRIME) study, aims to interpret neural activity for patients living with ALS, a neurological disease that destroys motor nerve function, so that they can experience “the joy of connecting with loved ones, browsing the web, or even playing games using only your thoughts,” according to a Neuralink promotional video.

However, Altman’s vision is more of a “soft” synthesis, one that began with the invention of the internet. It gained steam with the arrival of the smartphone, really took off during the social media era, and has finally brought us to this perplexing technological moment in time.

WITH THE IDEA OF CONSCIOUS DIGITAL MINDS on the horizon, it could be that humans are the proverbial frog in a boiling pot of water, and it’s only been a few years since things have started to feel a bit steamy. But as Bostrom argues in his book, this might be a boiling pot we don’t want to jump out of.

“We need to rethink what it means to be human in such a world where AI has taken care of all the practical tasks and we have a kind of a solved world,” Bostrom says. “You might have a much more radical form of automation … where maybe working for money at all becomes completely unnecessary because AI and robots can do everything better than we can do.”

But Bostrom says this is really only one layer of the philosophical onion.

“In the deeper layers, you realize that not just our economic efforts become obsolete but our instrumental efforts also,” Bostrom says. “You could still do these activities but they would be pointless in this condition … so what do we believe has value for its own sake and not as a means of achieving various other things?”

Sautoy also mentions that such a merger could be complicated by the fact that however digital consciousness does arise, it will most definitely be different from our own. That could lead to scenarios where our creations have no interest in having a biological buddy in the first place.

“The speed that AI will operate is not limited by embodiment,” Sautoy says. “Its pace of life will be so different to ours that maybe AI will look at us … the same way we look at a mountain.” While human lives rarely stretch beyond a century, mountains exist over millions of years. Similarly, the fast-paced, data-crunching “life” of AI could be equally unfathomable to our comparatively slow-paced lives.

Whether wired up like cyborgs or slowly erasing the boundary between our physical and digital lives, it’s hard to divine how humans will ultimately merge with their artificial creation - but Sautoy believes the risk is worth taking.

“I think that we are headed toward a hybrid future,” Sautoy says. “We still believe that we are the only beings with a high level of consciousness. … This is part of the whole Copernican journey that we are not unique. We’re not at the center.”

And for Bostrom, the journey will question the very definition of what it means to be human.

“Yes, we want humanity to survive and prosper, but what exactly do we mean by humanity?” Bostrom says. “Does it have to have two legs, two arms, and two eyes, and die after 70 years, or maybe that’s not the real essence of humanity. You could imagine all of those things changing quite a lot … perhaps humankind will grow into something much bigger.”

 

 

 

 

 

John Storm can control his ship by thinking commands, thanks to his brain implant, Hal, the onboard AI, and Captain Nemo, the autonomous navigation system, completing a cybernetic control system that incorporates Commander Storm as its human component.
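As a purely illustrative sketch of that control chain, the short Python example below models the flow from a thought-command, through a BCI decoder and the onboard AI, to the autonomous navigation system. Every name here (BioCoreBCI, HalAI, CaptainNemoNav, the blocked-verb filter) is invented for this example; it does not describe any real or planned interface.

# Hypothetical sketch of the cybernetic control chain described above:
# human intent -> BCI decoder -> onboard AI -> autonomous navigation system.
# All class names and behaviour are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Command:
    verb: str        # e.g. "set_course"
    argument: str    # e.g. "270 degrees"


class BioCoreBCI:
    """Stands in for the brain implant: turns a raw 'thought' into a Command."""
    def decode(self, thought: str) -> Command:
        verb, _, argument = thought.partition(" ")
        return Command(verb=verb, argument=argument)


class HalAI:
    """Stands in for the onboard AI: sanity-checks commands before execution."""
    BLOCKED_VERBS = {"ram", "fire"}   # a crude stand-in for a moral/safety filter

    def approve(self, command: Command) -> bool:
        return command.verb not in self.BLOCKED_VERBS


class CaptainNemoNav:
    """Stands in for the autonomous navigation system: executes approved commands."""
    def execute(self, command: Command) -> str:
        return f"Navigation executing: {command.verb} {command.argument}"


def think(thought: str) -> str:
    """The human is the first component of the loop; the rest is automation."""
    command = BioCoreBCI().decode(thought)
    if HalAI().approve(command):
        return CaptainNemoNav().execute(command)
    return f"Hal vetoed the command: {command.verb}"


if __name__ == "__main__":
    print(think("set_course 270 degrees"))
    print(think("fire torpedoes"))

The design point the sketch tries to capture is that the human originates intent, while the AI layer acts as a safety and morality filter before anything reaches the ship's actuators.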

 


 

 

 

This is not science-fiction; it is fact-based, built on technology that is available, should it be developed in the way suggested. Hence, this is science-faction. There is nothing to prevent any scientist from perfecting the CyberCore™ technology, and nothing to prevent scientists or entrepreneurs from developing a super-fast nano computer. The moment that happens, we have a two-way stream in which a computer can share the consciousness of a human, and a human can become imbued with the computing power of a mainframe, coupled with unlimited internet knowledge. Thus the web becomes a component of what could be a giant cybernetic organism, with the human at the end of the chain as an end effector - a means to harness and focus all that knowledge for good.

 

 

 

John Storm sometimes ponders moral issues aloud in front of Hal, even though he can communicate telepathically from the other side of the planet. It's a human thing - even for a cyborg.

 

 

Being a computer-AI-enhanced cyborg carries with it a greater responsibility to do the right thing. John Storm is a moral person, guided by his belief in always acting correctly in any given situation - for example, his conviction to help save the planet from kleptocratic politicians, who constantly seek to empire-build at the cost of worsening global warming, mainly because they don't understand the natural world and the delicate balance that keeps Earth safe and provides a home for all life. One of John's missions is to tackle corruption and profiteering, one of the causes of unsustainably high carbon footprints.

 

Both Dan and Hal understand the need for a moral compass for the computer BioCore BCI enhanced ‘John Storm’. This presents the duo with complex philosophical issues that may not have a definitive answer in the digital domain, where computers are not living, breathing biological organisms, and where beliefs are those of millions of people, all slightly varied and tailored to their local circumstances.

A moral compass is a set of principles or values that guide one’s actions and decisions based on what is right and wrong. A code of conduct is a set of rules or standards that define the expected behavior and responsibilities of a person or a group.

A computer BioCore BCI enhanced ‘John Storm’ might need a second opinion: a moral compass or a code of conduct to help him navigate the complex ethical challenges and dilemmas that he might face as a cyborg. For example, he might have to balance his own interests and goals with those of others, such as his crew, his enemies, or the environment. He might also have to deal with the potential conflicts or contradictions between his human and artificial components, such as his emotions, his logic, his autonomy, or his loyalties.

Dan and Hal agree that one possible way to provide a moral compass or a code of conduct for a computer BioCore BCI enhanced ‘John Storm’ is to use a framework or a model that is based on some ethical theories or principles. For instance, some of the common ethical theories or principles are:

- Utilitarianism: The moral action is the one that maximizes the overall happiness or well-being of the greatest number of people.

- Deontology: The moral action is the one that follows a set of universal and categorical rules or duties, regardless of the consequences.

- Virtue ethics: The moral action is the one that reflects the character and the virtues of a good person, such as courage, honesty, or wisdom.

- Care ethics: The moral action is the one that expresses care and compassion for others, especially those who are vulnerable or dependent.

- Social contract theory: The moral action is the one that respects the agreements or the norms that are established by a society or a community.

These are just some examples of how a moral compass or a code of conduct might be designed and implemented for the super-computer BioCore BCI enhanced ‘John Storm’. There may be other factors and considerations that could affect the suitability or the effectiveness of such a system, such as the context, the situation, the stakeholders, or the outcome. This is John's moral dilemma, which is why the final decision on what to do includes his human thinking.
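As a rough, non-authoritative illustration of how such a framework might be composed, the Python sketch below scores candidate actions against the five theories listed above and leaves the last word to a human override. The class names, theory weights, scoring numbers and the human_override hook are all invented for this example; they are not part of any real BioCore design.

# Hypothetical sketch of a multi-theory "moral compass" for a BCI-enhanced agent.
# All names, weights, and scoring rules are invented for illustration only.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Action:
    """A candidate action plus rough, hand-estimated ethical scores in [0, 1]."""
    name: str
    scores: Dict[str, float]  # e.g. {"utilitarian": 0.8, "deontological": 0.4, ...}


# Relative weight given to each ethical theory; these values are arbitrary.
THEORY_WEIGHTS: Dict[str, float] = {
    "utilitarian": 0.30,      # overall well-being produced
    "deontological": 0.25,    # conformity with universal rules or duties
    "virtue": 0.20,           # fit with virtues such as honesty and courage
    "care": 0.15,             # care for vulnerable or dependent parties
    "social_contract": 0.10,  # respect for community norms and agreements
}


def composite_score(action: Action) -> float:
    """Weighted sum of the per-theory scores for one candidate action."""
    return sum(weight * action.scores.get(theory, 0.0)
               for theory, weight in THEORY_WEIGHTS.items())


def recommend(actions: List[Action],
              human_override: Callable[[Action], bool]) -> Action:
    """Rank candidate actions, but let the human component veto the top choice.

    The override models the point made above: the final decision still
    includes John's own human judgement, not just the machine's ranking.
    """
    ranked = sorted(actions, key=composite_score, reverse=True)
    for candidate in ranked:
        if human_override(candidate):
            return candidate
    return ranked[0]  # fall back to the machine's best guess if all are vetoed


if __name__ == "__main__":
    options = [
        Action("disable weapons network", {"utilitarian": 0.9, "deontological": 0.6,
                                           "virtue": 0.7, "care": 0.8,
                                           "social_contract": 0.5}),
        Action("do nothing", {"utilitarian": 0.3, "deontological": 0.8,
                              "virtue": 0.4, "care": 0.3,
                              "social_contract": 0.7}),
    ]
    # Stand-in for the human decision; here it simply approves everything.
    choice = recommend(options, human_override=lambda a: True)
    print(f"Recommended action: {choice.name} "
          f"(score {composite_score(choice):.2f})")

A weighted sum is only one possible design; another would treat some theories, for example deontological rules, as hard constraints that veto an action outright no matter how well it scores elsewhere. Either way, the final step mirrors the point above: John's own human judgement completes the decision.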

 

 
 

 

POPULAR MECHANICS 11 JULY 2024 - CYBORGS AND CYBERNETICALLY ENHANCED BIOLOGICAL ORGANISMS - BRAIN IMPLANTED COMPUTERS (BICs) - THE ADVENTURES OF JOHN STORM

 

This website is Copyright © 2024 Jameson Hunter Literary Agency