Thursday, February 22, 2018

Friday Thinking 23 Feb. 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9




China's advantages in AI go beyond government commitment. Because of its sheer size, vibrant online commerce and social networks, and scant privacy protections, the country is awash in data, the lifeblood of deep learning systems. The fact that AI is a young field also works in China's favor, argues Chen Yunji, by encouraging a burgeoning academic effort that has put China within striking distance of the United States, long the leader in AI research. "For traditional scientific fields, Chinese [scientists] have a long way to go to compete with the U.S. or Europe. But for computer science, it's a relatively new thing. Young people can compete. Chinese can compete." In an editorial last week in The Boston Globe, Eric Lander, president of the Broad Institute in Cambridge, Massachusetts, warned that the United States has at best a 6-month lead over China in AI. "China played no role in launching the AI revolution, but is making breathtaking progress catching up," he wrote.

The fierce global competition in AI has downsides. University computer science departments are hollowing out as companies poach top talent. "Trends come and go, but this is the biggest one I've ever seen—a professor can go into industry to make $500,000 to $1 million" a year in the United States or China, says Michael Brown, a computer scientist at York University in Toronto, Canada.

In a more insidious downside, nations are seeking to harness AI advances for surveillance and censorship, and for military purposes. China's military "is funding the development of new AI-driven capabilities" in battlefield decision-making and autonomous weaponry, says Elsa Kania, a fellow at the Center for a New American Security in Washington, D.C. In the field of AI in China, she warned in a recent report, "The boundaries between civilian and military research and development tend to become blurred."

Facial recognition is now used routinely in China for shopping and to access some public services. For example, at a growing number of Kentucky Fried Chicken restaurants in China, customers can authorize digital payment by facial scan. Baidu's facial recognition systems confirm passenger identity at certain airport security gates. Recent AI advances have made it possible to identify individuals not only in up-close still photos, but also in video—a far more complex scientific task.

China’s massive investment in AI has an insidious downside

A popular misconception is that the potential — and the limits — of quantum computing must come from hardware. In the digital age, we’ve gotten used to marking advances in clock speed and memory. Likewise, the 50-qubit quantum machines now coming online from the likes of Intel and IBM have inspired predictions that we are nearing “quantum supremacy” — a nebulous frontier where quantum computers begin to do things beyond the ability of classical machines.

But quantum supremacy is not a single, sweeping victory to be sought — a broad Rubicon to be crossed — but rather a drawn-out series of small duels. It will be established problem by problem, quantum algorithm versus classical algorithm. “With quantum computers, progress is not just about speed,” said Michael Bremner, a quantum theorist at the University of Technology Sydney. “It’s much more about the intricacy of the algorithms at play.”

Each single qubit you add doubles the states the system can simultaneously store: Two qubits can store four states, three qubits can store eight states, and so on. Thus, you might need just 50 entangled qubits to model quantum states that would require exponentially many classical bits — 1.125 quadrillion to be exact — to encode.
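The doubling described above is just powers of two; a quick sketch (plain Python, my own illustration rather than anything from the article) makes the scale concrete:

```python
# Each added qubit doubles the number of basis states a register can
# hold in superposition: n qubits span 2**n states.
def num_states(qubits: int) -> int:
    return 2 ** qubits

for n in (2, 3, 50):
    print(f"{n} qubits -> {num_states(n):,} states")

# A classical simulator storing one complex amplitude (16 bytes) per
# state would need on the order of 2**50 * 16 bytes for 50 qubits:
bytes_needed = num_states(50) * 16
print(f"~{bytes_needed / 1e15:.0f} petabytes of amplitudes")
```

Running this reproduces the article's figure: 2**50 is about 1.126 quadrillion states, which is why 50 entangled qubits already strain classical simulation.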

Quantum Algorithms Struggle Against Old Foe: Clever Computers

One of the frames that I’ve explored for a long time is the idea of trying to take a long view of these things. My feeling is that one of our besetting sins at the moment, in relation for example to digital technology, is what Michael Mann once described as the sociology of the last five minutes. I’m constantly trying to escape from that. I write a newspaper column every week, and I've written a couple of books about this stuff. If you wanted to find a way of describing what I try to do, it is trying to escape from the sociology of the last five minutes.

Why is Gutenberg useful? He’s useful because he instills in us a sense of humility. The way I’ve come to explain that is with a thought experiment which I often use in talks and lectures. The thought experiment goes like this:

I want you to imagine that we’re back in Mainz, the small town on the Rhine where Gutenberg's press was established. The date is around 1476 or ’78, and you’re working for the medieval version of Gallup or MORI pollsters. You’ve got a clipboard in your hand and you’re stopping people and saying, "Excuse me, madam, would you mind if I asked you some questions?" And here’s question four: "On a scale of 1 to 5, where 1 is definitely yes and 5 is definitely no, do you think that the invention of printing by movable type will A) undermine the authority of the Catholic Church, B) trigger and fuel a Protestant Reformation, C) enable the rise of something called modern science, D) enable the creation of entirely undreamed of and unprecedented professions, occupations, industries, and E) change our conception of childhood?"

The State of Informed Bewilderment - Conversation With John Naughton

To me, what makes the Anthropocene unprecedented and fully worthy of the name is our growing knowledge of what we are doing to this world. Self-conscious global change is a completely new phenomenon. It puts us humans into a category all our own and is, I believe, the best criterion for the real start of the era. The Anthropocene begins when we start to realise that it has begun. This definition also provides a new angle on the long-vexing question of what differentiates our species from other life. Perhaps more than anything else, it is self-aware world-changing that marks us as something new on the planet. What are we? We are the species that can change the world and come to see what we’re doing.

...The moment when cognitive processes become a dominant mechanism of change is easily as significant as the oxygenation of the atmosphere or the advent of multicellular animal life. If global intelligence becomes a lasting planetary force, then I believe it is appropriate to regard this as the beginning of Earth’s fifth aeon.

During the last ice age, humans huddled around campfires and told stories, solidifying our identity as social, collective, problem-solving beings. Today, the world is being woven together rapidly, and we’re building distributed electronic campfires and gathering around them, struggling to find a newly enlarged sense of identity and purpose. We are a global force now. We are not going to relinquish this capacity, so we need to finish what we started in the East African savannahs and Pleistocene caves. We have new tools at our disposal that might allow us to change again to meet new challenges. Don’t write off our potential to wake up, to grow up, to ‘human up’ to our responsibilities and our capabilities.

If we are going to live up to the sapiens in the name that Linnaeus optimistically gifted us with, then we need to become fully human. We have to remake our world and ourselves. We have to create Terra Sapiens.

Welcome to Terra Sapiens

One hundred and sixty years ago, the first transatlantic telegram traveled from Britain to the United States along a rickety undersea wire. It consisted of 21 words – and took seventeen hours to arrive.

Today, the same trip takes as little as 60 milliseconds. A dense mesh of fiber-optic cables girdles the world, pumping vast quantities of information across the planet. The McKinsey Global Institute estimates that 543 terabits of data are flowing across borders every second. That’s the equivalent of roughly 13 million copies of the complete works of Shakespeare.
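The McKinsey comparison can be sanity-checked with back-of-the-envelope arithmetic; the ~5 MB size for the complete works of Shakespeare is my assumption, not a figure from the article:

```python
# Cross-border data flow: 543 terabits per second (McKinsey estimate).
bits_per_second = 543e12
bytes_per_second = bits_per_second / 8          # roughly 68 TB/s

# Assume the complete works of Shakespeare run to about 5 MB of text.
shakespeare_bytes = 5e6

copies_per_second = bytes_per_second / shakespeare_bytes
print(f"~{copies_per_second / 1e6:.1f} million copies per second")
```

Under that assumption the flow works out to roughly 13.6 million copies per second, consistent with the article's "roughly 13 million".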

The velocity and the volume of global communication aren’t the only things that have changed. So has its economic significance. Telegrams were useful for businessmen. But the flow of data now contributes more to world GDP than the flow of physical goods. In other words, there’s more money in moving information across borders than in moving soybeans and refrigerators.

Data is lifeblood of capitalism – don't hand corporate America control

Social theory also plays a critical role in understanding rare, catastrophic events, which can’t be assessed solely in terms of technical failure. Human error and forms of social organisation – such as the hierarchies used to manage sensitive technologies – often play a critical role in whether or not a crisis is averted, as the sociologist Charles Perrow argues in Normal Accidents: Living with High-Risk Technologies (1984). For example, before the Challenger disaster in 1986 – in which the space shuttle exploded shortly after take-off, killing all the crew members on board – some NASA staff had been aware of potential problems caused by the material used to seal the rocket boosters’ joints. However, certain organisational norms prevented these worries from being transmitted to those who had the power to delay the launch. Because of a failure to appreciate the power dynamics behind NASA’s structure, scientific knowledge could not prevent the tragedy, even when some of the scientists clearly saw the potential for disaster.

The tech bias: why Silicon Valley needs social theory

This is a must view 12 min video about the difference between rival economic goods and anti-rival economic goods. The fundamental difference is vital for developing an appropriate economic framework for the 21st century.

The Rivalrous and the Anti-Rivalrous

I've long considered this conceptual distinction to be one of the most important and least considered in the current environment. I hope this video is at least somewhat clear. For sure, it raises a lot more questions than it answers. Like - "how"?

This is a full length film - 104 min long - by Jeremy Rifkin - worth the view for anyone interested in the emerging economy.

The Third Industrial Revolution: A Radical New Sharing Economy

The global economy is in crisis. The exponential exhaustion of natural resources, declining productivity, slow growth, rising unemployment, and steep inequality force us to rethink our economic models. Where do we go from here? In this feature-length documentary, social and economic theorist Jeremy Rifkin lays out a road map to usher in a new economic system.

A Third Industrial Revolution is unfolding with the convergence of three pivotal technologies: an ultra-fast 5G communication internet, a renewable energy internet, and a driverless mobility internet, all connected to the Internet of Things embedded across society and the environment.

This 21st century smart digital infrastructure is giving rise to a radical new sharing economy that is transforming the way we manage, power and move economic life. But with climate change now ravaging the planet, it needs to happen fast. Change of this magnitude requires political will and a profound ideological shift.  

This is an interesting signal - about the emerging potential of Blockchain technologies to re-imagine the Internet - and to enable a new institution of records.

Andreessen Horowitz is backing a crypto-powered 'internet computer' that could be the future of cloud computing

VCs are backing a not-for-profit foundation developing the DFINITY protocol and will be rewarded in crypto tokens that power the network once it launches.
DFINITY is "building a system that will enable the internet to act as a giant computer."
People can then build applications or companies on this decentralized "cloud 3.0."
The DFINITY deal represents the first time Andreessen Horowitz, a well-known Silicon Valley VC firm, has invested in a protocol — in other words, backed the development of a digital process — rather than a company.

"We're building a system that will enable the internet to act as a giant computer," said Dom Williams, DFINITY's president and chief scientist. "It will be an open protocol, it won't be supported by a company, it will be supported by whoever connects their computers to that protocol."

Williams told Business Insider that both Polychain and Andreessen Horowitz will receive a certain amount of DFINITY crypto tokens once the network fully launches. These tokens will power the decentralized "internet computer" and will also be given as rewards to computers that hook up to the network and provide "mining" power.

DFINITY's next-generation technology will allow applications and companies to be built on a decentralized "cloud 3.0," Williams said, which is made up of a network of computers connected over the internet.

"This internet computer is interesting for a number of reasons. First of all, it can act as a cloud that can host traditional business systems and it will provide an alternative to hosting your business on Amazon or Google Cloud or Microsoft Azure.
"It will provide a different kind of technology stack. The total cost of ownership of these business systems will be dramatically lower.
Here is the link to DFINITY
The Internet Computer
A blockchain computer with unlimited capacity, incredible performance and algorithmic governance, shared by the world — Cloud 3.0
This is a 2 min video explaining DFINITY

DFINITY - Explainer Video

DFINITY is building an open, decentralized blockchain that runs smart contract software systems with vastly improved performance, capacity, and algorithmic governance.

This is a very good and sensible piece by Danah Boyd on the presence and use of Bots on Twitter. If you don’t know much about bots this is worth the read.
At the end of the day, I don’t really blame Twitter for giving these deeply engaged users what they want and turning a blind eye towards their efforts to puff up their status online. After all, the cosmetic industry is $55 billion. Then again, even cosmetic companies sometimes change their formulas when their products receive bad press.

The Reality of Twitter Puffery. Or Why Does Everyone Now Hate Bots?

Bots have been an intrinsic part of Twitter since the early days. Following the Pope’s daily text messaging services, the Vatican set up numerous bots offering Catholics regular reflections. Most major news organizations have bots so that you can keep up with the headlines of their publications. Twitter’s almost-anything-goes policy meant that people have built bots for all sorts of purposes. There are bots that do poetry, ones that argue with anti-vaxxers about their beliefs, and ones that call out sexist comments people post. I’m a big fan of the @censusAmericans bot created by FiveThirtyEight to regularly send out data from the Census about Americans.

Over the last year, sentiment towards Twitter’s bots has become decidedly negative. Perhaps most people didn’t even realize that there were bots on the site. They probably don’t think of @NYTimes as a bot. When news coverage obsesses over bots, they primarily associate the phenomenon with nefarious activities meant to seed discord, create chaos, and do harm. It can all be boiled down to: Russian bots. As a result, Congress saw bots as inherently bad and journalists keep accusing Twitter of having a “bot problem” without accounting for how their stories appear on Twitter through bots.

This is a wonderful 30 min video - where a designer talks of the last 20 years of web design - illuminating how we’ve entered the ‘Beta’ world - where the half-life of a skill seems to be ever shorter. This is relevant to all of us working in the digital environment - our tools become obsolete before they have a future.

Mirror Conf 2017 | Frank Chimero - How to Tell When You're Tired

Frank Chimero (key-mare-oh) is a designer, writer, and illustrator. He runs a one-man studio from Brooklyn, specializing in publication design for page and screen, digital design systems, and image-making.

How to Tell When You're Tired
The most famous story about work comes from Greek myth: Sisyphus woke each morning to push a rock up a mountain, then the next day found the rock back at the bottom, his progress gone and the challenge reset. After 20 years of making websites, his story sounds familiar to me. If expertise is impossible because everything changes so frequently, how does a person develop wisdom about their work? How do you not burn out? Are these silly questions? Maybe, but change begins with diligent effort and a new perspective on the work.

Here is a strong signal of the emerging digital environment - whether it emerges as a surveillance society or as a society ensuring the right not to be interfered with via mutual accountability - these sorts of capabilities are likely unstoppable.

UK police are now using fingerprint scanners on the streets to identify people in less than a minute

The system being used by West Yorkshire Police searches the 12 million fingerprint records kept in the UK's criminal and immigration databases
Police in the UK have started using a mobile fingerprinting system that lets them check the identity of an unknown person in less than a minute. Fingerprints collected on the street will be compared against the 12 million records contained in national criminal and immigration fingerprint databases and, if a match is found, will return the individual’s name, date of birth and other identifying information.

Officers will only resort to fingerprint scanning if they cannot identify an individual by other means, says Clive Poulton, who helped manage the project at the Home Office. The devices might be used in cases where someone has no identifying information on them, or appears to be giving police a fake name. “[Police] can now identify the person in front of them – whether they are known to them or not known to them, and then they can deal with them,” Poulton says.

There are currently two major national databases of fingerprints. The first, called IDENT1, contains fingerprints gathered by the police when they take someone into custody. Anyone convicted of a serious crime may have their fingerprints stored on the database indefinitely. People who were not convicted but are arrested or charged in connection with a serious crime may also have their fingerprints stored on the database for up to five years, or indefinitely if they were convicted of another crime.

Advances in fundamental science seem to be approaching the realms of magic.
In the quantum world, whenever you look at a system, or measure it, the system changes. And in this case, unstable particles can never decay while they’re being measured (just like the proverbial watched kettle that will never boil), so the quantum Zeno effect creates a system that’s effectively frozen with a very high probability.

Scientists Achieve Direct Counterfactual Quantum Communication For The First Time

For the first time in the history of quantum mechanics, scientists have been able to transmit a black and white image without sending any physical particles. The phenomenon can be explained by the quantum Zeno effect - named after Zeno's ancient paradox arguing that movement itself is impossible.
Regular quantum teleportation is based on the principle of entanglement – two particles that become inextricably linked so that whatever happens to one will automatically affect the other, no matter how far apart they are.

But that form of quantum teleportation still relies on particle transmission in some form or another. The two particles usually need to be together when they’re entangled before being sent to the people on either end of the message (so, they start in one place, and need to be transmitted to another before communication can occur between them).

Direct counterfactual quantum communication, on the other hand, relies on something other than quantum entanglement. Instead, it uses a phenomenon called the quantum Zeno effect.
Very simply, the quantum Zeno effect occurs when an unstable quantum system is repeatedly measured.
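A toy calculation makes the effect concrete - this is the standard textbook two-level model, my own illustration rather than the experiment described in the article:

```python
import math

# Two-level system driven from |0> toward |1> over a total rotation of
# pi/2. With n evenly spaced projective measurements, each measurement
# finds the system still in |0> with probability cos(pi/(2n))**2, so
# the overall survival probability is that factor to the n-th power.
def survival_probability(n_measurements: int) -> float:
    p_step = math.cos(math.pi / (2 * n_measurements)) ** 2
    return p_step ** n_measurements

for n in (1, 5, 50, 500):
    print(f"{n:>4} measurements: P(still in |0>) = {survival_probability(n):.4f}")
# As n grows, the survival probability approaches 1: frequent
# observation "freezes" the evolution - the quantum Zeno effect.
```

With a single end-point measurement the system has fully decayed; with hundreds of intermediate measurements it is almost certain to remain in its initial state.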

The phase transition in human understanding of biology continues in exponential progress toward the domestication of DNA. This article also provides a simple description of how CRISPR works.
The discovery of NmeCas9 happened by accident when the team was studying the basic function of the NmeCas9 protein in cutting DNA. The team was using RNA as the comparison, or a control sample—but noticed that it was getting cut, too.

Unlike CRISPR-Cas9, this protein can cut RNA

Researchers have discovered a single protein that can perform CRISPR-style, precise programmable cutting on both DNA and RNA.
This protein is among the first few Cas9 proteins to work on both types of genetic material without artificial helper components.

CRISPR-Cas9 acts as molecular scissors that can cut DNA at exactly the spot they’re asked to. The technique has transformed research in just five years, making it possible for hundreds of teams of scientists to snip out portions of a chromosome that are mutated, or to see what happens when a certain gene isn’t there. But CRISPR-Cas9 can’t cut the other kind of genetic material found in cells known as RNA.

Now, an initial biochemical study in laboratory test tubes, published in the journal Molecular Cell, shows the promise of the new CRISPR approach using the protein called NmeCas9. It’s derived from Neisseria meningitidis, the bacteria that cause some of the most severe and deadly cases of meningitis each year.

The team is working to test the tool in living bacteria cells to see if NmeCas9 achieves the same effect that they saw in test tubes. They hope to eventually progress to human cells. If it works, NmeCas9 could help expand the role of CRISPR in studying—and perhaps intervening—in many diseases.

This is an interesting signal - for the possible acceleration of the domestication of DNA. If this startup doesn’t succeed - the approach may be advanced by others.

Biologists would love to program cells as if they were computer chips

A startup is selling “circuits” for making drugs and chemicals inside bacteria.
Sitting in his startup lab space on the outskirts of MIT’s campus, Alec Nielsen opens his laptop and types in instructions for a genetically modified yeast cell that will glow yellow. He tells the program what sugars he plans to feed the cell—arabinose and lactose—and specifies that it should make a fluorescent protein normally found in jellyfish.

The computer takes about 60 seconds before spitting out a list of roughly 11,000 letters of DNA, along with what looks like a circuit diagram.

Nielsen heads a startup called Asimov that is trying to automate the design of sophisticated genetic modifications. Its software, called CELLO, is modeled on the type used to plan electronic circuits and computer chips with billions of transistors.

Manufacturing proteins, biofuels, or chemicals inside cells isn’t new. That is where insulin, alcohol, and the enzymes in laundry detergent come from. But getting a microbe to make what you want—when you want—without dropping dead from the effort isn’t easy.

Now scientists are designing a new generation of organisms that do more than continuously pump out gene products like factories. They want them to sense and respond to environmental cues, turn on at certain times, or become smart cancer drugs that are deadly only inside a tumor.

Think of the project as something like Henry Ford’s first automobile—hand built and, for now, one of a kind. One day, though, we may routinely design genomes on computer screens. Instead of engineering or even editing the DNA of an organism, it could become easier to just print out a fresh copy. Imagine designer algae that make fuel; disease-proof organs; even extinct species resurrected.

Another signal in the emergence of domesticated DNA.
“Over the next 10 years synthetic biology is going to be producing all kinds of compounds and materials with microorganisms,” says Boeke. “We hope that our yeast is going to play a big role in that.”

It took Boeke and his team eight years before they were able to publish their first fully artificial yeast chromosome. The project has since accelerated. Last March, the next five synthetic yeast chromosomes were described in a suite of papers in Science, and Boeke says that all 16 chromosomes are now at least 80 percent done. These efforts represent the largest amount of genetic material ever synthesized and then joined together.

The result is high-speed, human-driven evolution: millions of new yeast strains with different properties can be tested in the lab for fitness and function in applications like, eventually, medicine and industry.

In the future we won’t edit genomes—we’ll just print out new ones

Why redesigning the humble yeast could kick off the next industrial revolution.
At least since thirsty Sumerians began brewing beer thousands of years ago, Homo sapiens has had a tight relationship with Saccharomyces cerevisiae, the unicellular fungus better known as brewer’s yeast. Through fermentation, humans were able to harness a microscopic species for our own ends. These days yeast cells produce ethanol and insulin and are the workhorse of science labs.

That doesn’t mean S. cerevisiae can’t be further improved—at least not if Jef Boeke has his way. The director of the Institute for Systems Genetics at New York University’s Langone Health, Boeke is leading an international team of hundreds dedicated to synthesizing the 12.5 million genetic letters that make up a yeast cell’s genome.

In practice, that means gradually replacing each yeast chromosome—there are 16 of them—with DNA fabricated on stove-size chemical synthesizers. As they go, Boeke and collaborators at nearly a dozen institutions are streamlining the yeast genome and putting in back doors to let researchers shuffle its genes at will. In the end, the synthetic yeast—called Sc2.0—will be fully customizable.

Talking about programmable biology - this is a good signal of the emerging biotechnology.

New DNA nanorobots successfully target and kill off cancerous tumors

Science fiction no more — in an article out today in Nature Biotechnology, scientists were able to show tiny autonomous bots have the potential to function as intelligent delivery vehicles to cure cancer in mice.

These DNA nanorobots do so by seeking out and injecting cancerous tumors with drugs that can cut off their blood supply, shriveling them up and killing them.
“Using tumor-bearing mouse models, we demonstrate that intravenously injected DNA nanorobots deliver thrombin specifically to tumor-associated blood vessels and induce intravascular thrombosis, resulting in tumor necrosis and inhibition of tumor growth,” the paper explains.

DNA nanorobots are a somewhat new concept for drug delivery. They work by getting programmed DNA to fold into itself like origami and then deploying it like a tiny machine, ready for action.

This is a very good signal for the future of our health and lifespan.
"This approach bypasses the need to identify tumour-specific immune targets and doesn't require wholesale activation of the immune system or customisation of a patient's immune cells."

A New Cancer 'Vaccine' Wiped Out Tumours in Mice, And Human Trials Are Next

This treatment is incredibly effective.
An injectable "vaccine" delivered directly to tumours in mice has been found to eliminate all traces of those tumours - and it works on many different kinds of cancers, including untreated metastases in the same animal.

Scientists at Stanford University School of Medicine have developed the potential treatment using two agents that boost the body's immune system, and a human clinical trial in lymphoma patients is currently underway.

"When we use these two agents together, we see the elimination of tumours all over the body," said senior researcher, oncologist Ronald Levy.

To test it, laboratory mice were transplanted with mouse lymphoma in two places, or genetically engineered to develop breast cancer.
Of the 90 mice with lymphoma, 87 were completely cured - the treatment was injected into one tumour, and both were destroyed. The remaining 3 had a recurrence of the lymphoma, which cleared up after a second treatment.

This is a great signal of the transformation of our understanding of biology and evolution - including selection and, more importantly, cognition.
"These processes underlie brain functions ranging from classical operant conditioning to human cognition and the concept of 'self,'"

An Ancient Virus May Be Responsible for Human Consciousness

You've got an ancient virus in your brain. In fact, you've got an ancient virus at the very root of your conscious thought.

According to two papers published in the journal Cell in January, long ago, a virus bound its genetic code to the genome of four-limbed animals. That snippet of code is still very much alive in humans' brains today, where it does the very viral task of packaging up genetic information and sending it from nerve cells to their neighbors in little capsules that look a whole lot like viruses themselves. And these little packages of information might be critical elements of how nerves communicate and reorganize over time — tasks thought to be necessary for higher-order thinking, the researchers said.

Though it may sound surprising that bits of human genetic code come from viruses, it's actually more common than you might think: A review published in Cell in 2016 found that between 40 and 80 percent of the human genome arrived from some archaic viral invasion.

That's because viruses aren't just critters that try to make a home in a body, the way bacteria do. Instead, as Live Science has previously reported, a virus is a genetic parasite. It injects its genetic code into its host's cells and hijacks them, turning them to its own purposes — typically, that means as factories for making more viruses. This process is usually either useless or harmful to the host, but every once in a while, the injected viral genes are benign or even useful enough to hang around. The 2016 review found that viral genes seem to play important roles in the immune system, as well as in the early days of embryo development.

Another signal in the trajectory of Moore’s Law is Dead - Long Live Moore’s Law of computational paradigms.
“The field’s full of hype, and it’s nice to see quality work presented in an objective way,” said Dr. Carver Mead, an engineer at the California Institute of Technology in Pasadena not involved in the work.

Brain-Like Chips Now Beat the Human Brain in Speed and Efficiency

Move over, deep learning. Neuromorphic computing—the next big thing in artificial intelligence—is on fire.
Just last week, two studies individually unveiled computer chips modeled after information processing in the human brain.

The first, published in Nature Materials, found a perfect solution to deal with unpredictability at synapses—the gap between two neurons that transmit and store information. The second, published in Science Advances, further amped up the system’s computational power, filling synapses with nanoclusters of supermagnetic material to bolster information encoding.

The result? Brain-like hardware systems that compute faster—and more efficiently—than the human brain.
“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” said Dr. Jeehwan Kim, who led the first study at MIT in Cambridge, Massachusetts.

These studies show that we may be nearing a benchmark where artificial synapses match—or even outperform—their human inspiration.
But to Dr. Steven Furber, an expert in neuromorphic computing, we still have a ways before the chips go mainstream.

The continuing decrease in the cost of solar energy may accelerate even further in the next decade. The 3 min video is worth the view.

Semiconductor Breakthrough May Be Game-changer for Organic Solar Cells

In an advance that could push cheap, ubiquitous solar power closer to reality, University of Michigan researchers have found a way to coax electrons to travel much further than was previously thought possible in the materials often used for organic solar cells and other organic semiconductors.

“For years, people had treated the poor conductivity of organics as an unavoidable fact, and this shows that that’s not always the case,” said Stephen Forrest, the Peter A. Franken Distinguished University Professor of Engineering and Paul G. Goebel Professor of Engineering at U-M, who led the research.

Unlike the inorganic solar cells widely used today, organics can be made of inexpensive, flexible carbon-based materials like plastic. Manufacturers could churn out rolls of them in a variety of colors and configurations, to be laminated unobtrusively into almost any surface. Organics’ notoriously poor conductivity, however, has slowed research. Forrest believes this discovery could change the game. The findings are detailed in a paper published online January 17, 2018 in the journal Nature.

In the emerging space of self-driving transport, AI, and robotics, not only does mass transit have to be re-imagined - so do many forms of the service economy. This is an interesting possibility - why deliver the milk when you can deliver the store? There’s a 2 min video and some interesting graphics.

The Grocery Store Of The Future Is Mobile, Self-Driving, And Run By AI

Can the Moby store bring locally controlled convenience stores to places that lack a simple place to buy essentials?
In Shanghai, a prototype of a new 24-hour convenience store has no staff, no registers, and the whole thing is on wheels, designed to eventually drive itself to a warehouse to restock, or to a customer to make a delivery.

The startup behind it believes that it’s the model for the grocery store of the future–and because it’s both mobile and far cheaper to build and operate than a typical store, it could also help bring better access to groceries to food deserts and rural areas.

For consumers, it’s designed to be an easier way to shop. To use the store, called Moby, you download an app and use your phone to open the door. A hologram-like AI greets you, and, as you shop, you scan what you want to buy or place it in a smart basket that tracks your purchases. Then you walk out the door; instead of waiting in line, the store automatically charges your card when you leave (Amazon is testing a similar system). The tiny shop will stock fresh food and other daily supplies, and if you want something else you can order it using the store’s artificial intelligence. The packages will be waiting when you return to shop the next time. When autonomous vehicles are allowed on roads, the store could also show up at your home, and the company is also testing a set of drones to make small deliveries.

This was embedded in a presentation noted above - but it’s a must-see scientific verification - 1 min video.

Real life Rabbit vs Tortoise race...