It sounds like something out of George Orwell’s 1984, yet scientists have touted it as a major advancement in the field of medicine. Have you heard of the ‘electronic tattoo’, fully equipped to track patients’ vital signs and report the findings to researchers? The technology, known as an epidermal electronic system (EES), was developed by an international team of researchers from the United States, China and Singapore.
When it comes to microchips, implants, or electronic tattoos—it all sounds a little too futuristic, like these “advances” may be paving the way for government tracking of citizens, or worse. But some industries are promoting these high-tech ideas as major advancements in their field. The medical field is just one place these ideas are gaining a foothold.
According to the International Business Times, hospitals and doctors’ offices may one day soon outfit their patients with temporary electronic tattoos. These little skin-patches are said to carry a wealth of information in a tiny space and can reportedly help reduce medical errors while improving care.
“Our goal was to develop an electronic technology that could integrate with the skin in a way that is mechanically and physiologically invisible to the user,” says John Rogers with the University of Illinois at Urbana-Champaign. So, invisible electronic tattoos are good?
“It’s a technology that blurs the distinction between electronics and biology,” says Rogers in characterizing the patches that allow researchers to track vital signs and more.
The devices are small and “nearly weightless”. They are thinner than a human hair and attach to the body using water rather than an adhesive. In that regard, they are very much like temporary tattoos. But unlike the bubblegum tattoos, these have electronic components.
Researchers believe they will one day be used in hospitals worldwide. And given that we already implant microchips in our pets, the researchers may be right. It isn’t as difficult as it seems to convince people of what’s ‘right’ for them.
Rather than hooking someone up to a wide range of wires and adhesives, the small patches will give medical professionals all of the vital information they need. In addition, the researchers are working on voice-activated variations that let wearers operate a video game by voice command with more than 90-percent accuracy.
This aspect could lead you to wonder: if they can be voice-operated by the wearer, couldn’t they be voice-operated by another controller?
“This type of device might provide utility for those who suffer from certain diseases of the larynx. It could also form the basis of a sub-vocal communication capability, suitable for covert or other uses,” said Rogers.
The scientists are now working on a better adhesive that doesn’t sacrifice the ease of the water application. Their concern is that skin constantly regenerates, so a tattoo would need to be reapplied before the old one was lost along with sloughed-off dead skin cells.
It seems that this ‘advancement’ has just brought us closer to merging man with machine. Soon, some of us may even be grasping immortality.
Maybe We’re Making It Too Easy For The Machines To Take Over
Machines that can think for themselves, attached to a global brain, with the ability to self-replicate? Yeah, we’re making that happen.
This article is part of ReadWrite Future Tech, an annual series in which we explore how technologies that will shape our lives in the years to come are grounded in the innovation and research of today.
We have seen the future, and it’s starting to look a lot like Skynet.
That self-aware computer system—yes, the one that tries to exterminate the human race in the Terminator movies (and one TV show)—is a potent symbol of Frankensteinian hubris. It is mirrored in the Singularity, the idea that technological progress will soon hit exponential growth, leading to self-aware robots and artificial intelligence that seize control of their own destiny, rendering humans irrelevant if not extinct. (Unless people go transhuman first, although that’s another article entirely.)
The Singularity may never happen. Artificial intelligence—long predicted, never realized—may be much harder to achieve than we think. An emerging computer consciousness might pass through a period of infancy, during which humanity might be able to take countermeasures of one sort or another. Self-aware robots might turn out to be benevolent, or even completely uninterested in humanity. It’s impossible to predict.
Here, we’ll just assume the worst comes to pass. And this scenario is based on technologies that we’re feverishly developing today.
Creating The Tools Of Our Demise
What if computer code could write itself? What if robots could think for themselves and continuously learn from their environment while being fed contextual information from a vast global network of data? What if the machines could build themselves and propagate, much in the same way that mammals give birth to new mammals?
Scientists are already researching computer chips and networks that act like the human brain. These chips could allow computers to learn and act on their own in ways that we never thought possible. I saw researchers demonstrate a simple robot with one of these chips that was given an order to stand up. It squirmed, it stumbled … and it stood, having learned that behavior on its own.
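The “learned that behavior on its own” part of that demo can be illustrated with a deliberately tiny sketch: random trial and error that keeps whatever raises a reward signal. Everything below (the two-joint model, the made-up “height” reward) is an invented toy for illustration, not the neuromorphic hardware described above.

```python
import random

def height(joints):
    # Hypothetical reward: the robot stands tallest when both
    # joint angles are near 90 degrees.
    return -sum(abs(angle - 90.0) for angle in joints)

def learn_to_stand(steps=5000, seed=0):
    """Hill-climbing trial and error: nudge the joints at random
    and keep any change that raises the reward."""
    rng = random.Random(seed)
    joints = [0.0, 0.0]          # start collapsed on the floor
    best = height(joints)
    for _ in range(steps):
        trial = [a + rng.uniform(-5, 5) for a in joints]
        score = height(trial)
        if score > best:         # keep only changes that help
            joints, best = trial, score
    return joints

print(learn_to_stand())          # both angles end up near 90
```

No line of this code says “stand up”; the behavior emerges from feedback alone, which is the point of the demo.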
We may look back one day and see this as the first step towards our doom. Matt Grob, executive vice president of Qualcomm Technologies, wondered whether it was ethical to turn the robot off after having imbued it with a certain degree of sentience.
Computers and machines need instructions to do just about anything. By contrast, the human brain contextualizes external stimuli and then issues commands based on instinct, emotion, memory and higher reasoning. Scientists are still unraveling exactly how it all works, but it’s pretty clear there’s no master program directing our behavior.
Computer brains don’t work like this. Machines are told what to do by lines of code that are programmed by humans. If the code doesn’t specify a function, then the computer pretty much can’t take action.
If computers can rewrite code, however, the game potentially changes. Suppose, for instance, that someone created a database that indexed all known lines of code in the world and could then combine them in a specified way to perform a desired function without any human input at all.
A startup in Israel is working on just such a concept. SparkBeyond, founded by Sagie Davidovich, is creating an engine that will comb all of the code in GitHub and then assemble parts as needed to create new application programming interfaces (APIs). A developer would just need to specify the sort of functions he or she wants and SparkBeyond would assemble it automatically. Call it recombinant code.
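As a thought experiment, “recombinant code” can be sketched as a search over a registry of existing functions, chained until the requested input and output types line up. SparkBeyond’s actual engine is not public, so every name and data structure here is a hypothetical stand-in.

```python
from collections import deque

# Hypothetical index of reusable code snippets, keyed by the type
# each one consumes and the type it produces.
REGISTRY = {
    ("str", "list"): lambda s: s.split(),   # text -> words
    ("list", "int"): len,                   # words -> count
    ("int", "str"): str,                    # count -> text
}

def assemble(in_type, out_type):
    """Breadth-first search for the shortest pipeline of registered
    functions that turns `in_type` into `out_type`."""
    queue = deque([(in_type, [])])
    seen = {in_type}
    while queue:
        t, chain = queue.popleft()
        if t == out_type:
            def pipeline(x, chain=chain):
                for f in chain:
                    x = f(x)
                return x
            return pipeline
        for (a, b), f in REGISTRY.items():
            if a == t and b not in seen:
                seen.add(b)
                queue.append((b, chain + [f]))
    raise LookupError(f"no pipeline from {in_type} to {out_type}")

# "Specify the function you want" and let the engine compose it:
word_count = assemble("str", "int")   # chains split, then len
print(word_count("machines building machines"))  # → 3
```

The developer states only the desired signature; the composition itself happens without a human writing the glue code, which is the idea the paragraph describes.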
Now imagine a robot with a neural processor that lets it learn new behaviors and which can also think for itself by rewriting its own code. It could rewrite any of its original programming—including any restriction or directive from humans—at will.
Brains And Building
Next up: The Internet. It’s a terrific resource, one of the greatest human inventions in history. It’s a global network, a decentralized brain like no other ever created. It’s got memory in cloud storage, reasoning (of a sort) in cloud-based processing power, and lightning fast synapses thanks to fiber-optic bundles that criss-cross the globe.
If machines become self-aware and start writing their own code, they could theoretically take control of the brain. Worse, the Internet itself could “wake up” and start controlling, well, just about everything.
Either way, self-aware machines would need a way to make more machines. We’re already laying the groundwork for that, thanks to the Internet of Things, 3D printing (also known as “additive” manufacturing), and highly automated, smart, data-driven factories (sometimes termed the Industrial Internet).
In the Internet of Things, devices large and small are all imbued with processing power and connected to one another, allowing them to share data and, under certain conditions, control one another. Everything is online, everything is monitored, everything is connected—our homes, our utilities, our appliances, vehicles, financial systems, government … just about anything you could think of. The Internet of Things could be a trillion sensors across the world monitoring and feeding data back to databases.
3D printing is the concept of manufacturing physical objects via “additive” printing techniques, typically by adding patterned layers of material step by step until a product takes shape. (It’s similar to the way printers create documents by adding line after line of ink or toner.) 3D printed objects could be the most trivial of things (like a flower vase) or complex structures, like homes or machine parts.
The Industrial Internet (smart plus additive manufacturing) combines Big Data, sensors and 3D printing to create incredibly efficient, automated manufacturing plants. General Electric, for instance, recently opened a smart manufacturing plant in Schenectady, N.Y., that has more than 10,000 sensors monitoring everything from air pressure and temperature to energy consumption. The factory is connected with Wi-Fi nodes throughout and employees use iPads to monitor the manufacturing process. Currently, GE makes batteries at the plant but the “smart” manufacturing process will soon evolve to more complex functions.
Take all of these items together—machines that think for themselves, a world where everything is connected, a brain to control it, sensors to monitor it, the ability to build without help of humans and factories to do it in—and one can envision a future where the machines take over. The scary part? All of these technologies exist in some form or another today.
It’s almost enough to make you reconsider Luddism, even if that didn’t work so well the first time around.
It has been claimed that Mankind’s last great invention will be the first self-replicating intelligent machine. The Hollywood cliché that artificial intelligence will take over the world could soon become scientific reality as AI matches, then surpasses, human intelligence. Each year AI’s cognitive speed and power doubles — ours does not. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail — human-level intelligence. Scientists argue that AI that advanced will have survival drives much like our own. Can we share the planet with it and survive?
Our Final Invention, a brilliant new summary of the last 15 years of academic research on risks from advanced AI by James Barrat, explores how the pursuit of Artificial Intelligence challenges our existence with machines that won’t love us or hate us, but whose indifference could spell our doom. Until now, intelligence has been constrained by the physical limits of its human hosts. What will happen when the brakes come off the most powerful force in the universe?
Here are the critical points Barrat explores:
Intelligence explosion this century. We’ve already created machines that are better than humans at chess and many other tasks. At some point, probably this century, we’ll create machines that are as skilled at AI research as humans are. At that point, they will be able to improve their own capabilities very quickly. (Imagine 10,000 Geoff Hintons doing AI research around the clock, without any need to rest, write grants, or do anything else.) These machines will thus jump from roughly human-level general intelligence to vastly superhuman general intelligence in a matter of days, weeks or years (it’s hard to predict the exact rate of self-improvement). Scholarly references: Chalmers (2010); Muehlhauser & Salamon (2013); Muehlhauser (2013); Yudkowsky (2013).
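The jump “in a matter of days, weeks or years” comes from the feedback loop in that argument: each improvement makes the next improvement faster. A toy model makes the shape of the curve visible; the starting level and feedback constant below are arbitrary assumptions, not forecasts.

```python
# Toy model of recursive self-improvement: each cycle, the system
# improves itself by a factor that grows with its current level.
# All numbers are illustrative assumptions, not predictions.

def cycles_to_exceed(target, level=1.0, feedback=0.1):
    """Count improvement cycles until capability passes `target`."""
    cycles = 0
    while level < target:
        level *= 1.0 + feedback * level   # better AI -> bigger next step
        cycles += 1
    return cycles

# Early cycles crawl; once the loop takes off, raising the target a
# hundredfold adds only a cycle or two.
print(cycles_to_exceed(100))
print(cycles_to_exceed(10_000))
```

The takeoff-after-a-slow-start shape, rather than the specific numbers, is what the “hard to predict the exact rate” caveat in the summary refers to.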
The power of superintelligence. Humans steer the future not because we’re the strongest or fastest but because we’re the smartest. Once machines are smarter than we are, they will be steering the future rather than us. We can’t constrain a superintelligence indefinitely: that would be like chimps trying to keep humans in a bamboo cage. In the end, if vastly smarter beings have different goals than you do, you’ve already lost.
Superintelligence does not imply benevolence. In AI, “intelligence” just means something like “the ability to efficiently achieve one’s goals in a variety of complex and novel environments.” Hence, intelligence can be applied to just about any set of goals: to play chess, to drive a car, to make money on the stock market, to calculate digits of pi, or anything else. Therefore, by default a machine superintelligence won’t happen to share our goals: it might just be really, really good at maximizing ExxonMobil’s stock price, or calculating digits of pi, or whatever it was designed to do. As Theodore Roosevelt said, “To educate [someone] in mind and not in morals is to educate a menace to society.”
Convergent instrumental goals. A few specific “instrumental” goals (means to ends) are implied by almost any set of “final” goals. If you want to fill the galaxy with happy sentient beings, you’ll first need to gather a lot of resources, protect yourself from threats, improve yourself so as to achieve your goals more efficiently, and so on. That’s also true if you just want to calculate as many digits of pi as you can, or if you want to maximize ExxonMobil’s stock price. Superintelligent machines are dangerous to humans — not because they’ll angrily rebel against us — rather, the problem is that for almost any set of goals they might have, it’ll be instrumentally useful for them to use our resources to achieve those goals. As Yudkowsky put it, “The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”
Human values are complex. Our idealized values — i.e., not what we want right now, but what we would want if we had more time to think about our values, resolve contradictions in our values, and so on — are probably quite complex. Cognitive scientists have shown that we don’t care just about pleasure or personal happiness; rather, our brains are built with “a thousand shards of desire.” As such, we can’t give an AI our values just by telling it to “maximize human pleasure” or anything so simple as that. If we try to hand-code the AI’s values, we’ll probably miss something that we didn’t realize we cared about.
In addition to being complex, our values appear to be “fragile” in the following sense: there are some features of our values such that, if we leave them out or get them wrong, the future contains nearly 0% of what we value rather than 99% of what we value. For example, if we get a superintelligent machine to maximize what we value except that we don’t specify consciousness properly, then the future would be filled with minds processing information and doing things but there would be “nobody home.” Or if we get a superintelligent machine to maximize everything we value except that we don’t specify our value for novelty properly, then the future could be filled with minds experiencing the exact same “optimal” experience over and over again, like Mario grabbing the level-end flag on a continuous loop for a trillion years, instead of endless happy adventure.