Thursday, May 3, 2018

Friday Thinking 4 May 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition in which tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9





Stanislaw Lem thus concludes that if our technological civilization is to avoid falling into decay, human obsolescence in one form or another is unavoidable. The sole remaining option for continued progress would then be the “automatization of cognitive processes” through development of algorithmic “information farms” and superhuman artificial intelligences. This would occur via a sophisticated plagiarism, the virtual simulation of the mindless, brute-force natural selection we see acting in biological evolution, which, Lem dryly notes, is the only technique known in the universe to construct philosophers, rather than mere philosophies.

The result is a disconcerting paradox, which Lem expresses early in the book: To maintain control of our own fate, we must yield our agency to minds exponentially more powerful than our own, created through processes we cannot entirely understand, and hence potentially unknowable to us. This is the basis for Lem’s explorations of The Singularity, and in describing its consequences he reaches many conclusions that most of its present-day acolytes would share. But there is a difference between the typical modern approach and Lem’s, not in degree, but in kind.

Unlike the commodified futurism now so common in the bubble-worlds of Silicon Valley billionaires, Lem’s forecasts weren’t really about seeking personal enrichment from market fluctuations, shiny new gadgets, or simplistic ideologies of “disruptive innovation.” In Summa Technologiae and much of his subsequent work, Lem instead sought to map out the plausible answers to questions that today are too often passed over in silence, perhaps because they fail to neatly fit into any TED Talk or startup business plan: Does technology control humanity, or does humanity control technology? Where are the absolute limits for our knowledge and our achievement, and will these boundaries be formed by the fundamental laws of nature or by the inherent limitations of our psyche? If given the ability to satisfy nearly any material desire, what is it that we actually would want?

To Lem (and, to their credit, a sizeable number of modern thinkers), the Singularity is less an opportunity than a question mark, a multidimensional crucible in which humanity’s future will be forged.

“I feel that you are entering an age of metamorphosis; that you will decide to cast aside your entire history, your entire heritage and all that remains of natural humanity—whose image, magnified into beautiful tragedy, is the focus of the mirrors of your beliefs; that you will advance (for there is no other way), and in this, which for you is now only a leap into the abyss, you will find a challenge, if not a beauty; and that you will proceed in your own way after all, since in casting off man, man will save himself.”

The Book No One Read



Urbanisation might be the most profound change to human society in a century, more telling than colour, class or continent
At some unknown moment between 2010 and 2015, for the first time in human history, more than half the world’s population lived in cities. Urbanisation is unlikely to reverse. Every week since, another 3 million country dwellers have become urbanites. Rarely in history has a small number of metropolises bundled as much economic, political and cultural power over such vast swathes of hinterlands. In some respects, these global metropolises and their residents resemble one another more than they do their fellow nationals in small towns and rural areas. Whatever is new in our global age is likely to be found in cities.

For centuries, philosophers and sociologists, from Jean-Jacques Rousseau to Georg Simmel, have alerted us to how profoundly cities have formed our societies, minds and sensibilities. The widening political polarisation between big cities and rural areas, in the United States as well as Europe, has driven home the point of quite how much the relationship between cities and the provinces, the metropolis and the country, shapes the political lives of societies. The history of cities is an extraordinary guide to understanding today’s world. Yet, compared with historians at large, as well as more present-minded scholars of urban studies, urban historians have not featured prominently in public conversation as of late.

A metropolitan world




Late on the night of October 4, 1957, Communist Party Secretary Nikita Khrushchev was at a reception at the Mariinsky Palace, in Kiev, Ukraine, when an aide called him to the telephone. The Soviet leader was gone a few minutes. When he reappeared at the reception, his son Sergei later recalled, Khrushchev’s face shone with triumph. “I can tell you some very pleasant and important news,” he told the assembled bureaucrats. “A little while ago, an artificial satellite of the Earth was launched.” From its remote Kazakh launchpad, Sputnik 1 had lifted into the night sky, blasting the Soviet Union into a decisive lead in the Cold War space race.
News of the launch spread quickly. In the US, awestruck citizens wandered out into their backyards to catch a glimpse of the mysterious orb soaring high above them in the cosmos. Soon the public mood shifted to anger – then fear. Not since Pearl Harbour had their mighty nation experienced defeat. If the Soviets could win the space race, what might they do next?

Keen to avert a crisis, President Eisenhower downplayed Sputnik’s significance. But, behind the scenes, he leapt into action. By mid-1958 Eisenhower announced the launch of a National Aeronautics and Space Administration (better known today as Nasa), along with a National Defense and Education Act to improve science and technology education in US schools. Eisenhower recognised that the battle for the future no longer depended on territorial dominance. Instead, victory would be achieved by pushing at the frontiers of the human mind.

Sixty years later, Chinese President Xi Jinping experienced his own Sputnik moment. This time it wasn’t caused by a rocket lifting off into the stratosphere, but a game of Go – won by an AI. For Xi, the defeat of the Korean Lee Sedol by DeepMind’s AlphaGo made it clear that artificial intelligence would define the 21st century as the space race had defined the 20th.

The event carried an extra symbolism for the Chinese leader. Go, an ancient Chinese game, had been mastered by an AI belonging to an Anglo-American company. As a recent Oxford University report confirmed, despite China’s many technological advances, in this new cyberspace race, the West had the lead.

China’s children are its secret weapon in the global AI arms race




The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

Against metrics: how measuring performance by numbers backfires




The power and potential of computation to tackle important problems has never been greater. In the last few years, the cost of computation has continued to plummet. The Pentium IIs we used in the first year of Google performed about 100 million floating point operations per second. The GPUs we use today perform about 20 trillion such operations — a factor of about 200,000 difference — and our very own TPUs are now capable of 180 trillion (180,000,000,000,000) floating point operations per second.

Even these startling gains may look small if the promise of quantum computing comes to fruition. For a specialized class of problems, quantum computers can solve them exponentially faster. For instance, if we are successful with our 72 qubit prototype, it would take millions of conventional computers to be able to emulate it. A 333 qubit error-corrected quantum computer would live up to our name, offering a 10^100x speedup.

There are several factors at play in this boom of computing. First, of course, is the steady hum of Moore’s Law, although some of the traditional measures such as transistor counts, density, and clock frequencies have slowed. The second factor is greater demand, stemming from advanced graphics in gaming and, surprisingly, from the GPU-friendly proof-of-work algorithms found in some of today’s leading cryptocurrencies, such as Ethereum. However, the third and most important factor is the profound revolution in machine learning that has been building over the past decade. It is both made possible by these increasingly powerful processors and is also the major impetus for developing them further.

The new spring in artificial intelligence is the most significant development in computing in my lifetime. When we started the company, neural networks were a forgotten footnote in computer science, a remnant of the AI winter of the 1980s. Yet today, this broad brush of technology has found an astounding number of applications.

Every month, there are stunning new applications and transformative new techniques. In this sense, we are truly in a technology renaissance, an exciting time where we can see applications across nearly every segment of modern society.

However, such powerful tools also bring with them new questions and responsibilities. How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?

Sergey Brin - Alphabet - 2017 Founders’ Letter
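The arithmetic in the letter is easy to check. Here is a minimal sketch in Python using the letter’s own figures (variable names are mine; the 2^n framing is the usual idealization for the specialized problems Brin mentions, not a general-purpose speedup):

```python
# Figures from the 2017 Founders' Letter.
pentium_ii_flops = 100e6   # ~100 million floating point ops/sec (1998-era CPU)
gpu_flops = 20e12          # ~20 trillion ops/sec
tpu_flops = 180e12         # ~180 trillion ops/sec

print(gpu_flops / pentium_ii_flops)   # 200000.0 -> "a factor of about 200,000"
print(tpu_flops / gpu_flops)          # 9.0 -> TPUs are ~9x the GPU figure

# For problems where an n-qubit machine gives a ~2^n advantage:
speedup = 2 ** 333
print(len(str(speedup)))              # 101 digits, i.e. roughly 10^100
```

The 101-digit figure is a googol, which is the joke in “live up to our name”: a googol is the number Google is named after.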




Added to this, there is technological anxiety, too – what is it to be a man when there are so many machines? Thus, Dilov invents a Fourth Law of Robotics, to supplement Asimov’s famous three, which states that ‘the robot must, in all circumstances, legitimate itself as a robot’. This was a reaction by science to the roboticists’ wish to give their creations ever more human qualities and appearance, making them subordinate to their function – often copying animal or insect forms.

Finally, it was Kesarovski’s time. He was a populariser of science, often writing computer guides for children, as well as essays that lauded information technology as a solution to future problems. This was reflected in his short stories, three of which were published in the collection The Fifth Law of Robotics (1983). In the first, he explored a vision of the human body as a cybernetic machine. A scientist looking for proof of alien consciousness finds it – in his own blood cells. Deciphering messages sent by an alien mind, trying to decode what their descriptions of society actually mean, he gradually comes to understand his own body as a sort of robot.

Kesarovski’s vision of nesting cybernetic machines – turtles all the way down or up – indicates his own training as one of the regime’s specialists: he was a more optimistic writer than Dilov.

In Kesarovski’s telling, the Fifth Law [of Robotics] states that ‘a robot must know it is a robot’. As the novella progresses, we face a cyborg that melds the best machine and human mind together… For Kesarovski, computers and robots held dangers, but also a promise, if humanity could one day see that it was both a type of robot itself, and in a position only to gain from the machines’ powers, allowing it to attain the next step in its historical progress.

Communist robot dreams





This is a nice summary of the history of AI to this point by Rodney Brooks.
The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.

The Origins of “Artificial Intelligence”

THE EARLY DAYS
It is generally agreed that John McCarthy coined the phrase “artificial intelligence” in the written proposal for a 1956 Dartmouth workshop, dated August 31st, 1955. It is authored by, in listed order, John McCarthy of Dartmouth, Marvin Minsky of Harvard, Nathaniel Rochester of IBM and Claude Shannon of Bell Laboratories. Later all but Rochester would serve on the faculty at MIT, although by the early sixties McCarthy had left to join Stanford University. The nineteen-page proposal has a title page and an introductory six pages (1 through 5a), followed by individually authored sections on proposed research by the four authors. It is presumed that McCarthy wrote those first six pages, which include a budget to be provided by the Rockefeller Foundation to cover 10 researchers.

The title page says A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE. The first paragraph includes a sentence referencing “intelligence”:
The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

And then the first sentence of the second paragraph starts out:
The following are some aspects of the artificial intelligence problem:

That’s it! No description of what human intelligence is, no argument about whether or not machines can do it (i.e., “do intelligence”), and no fanfare on the introduction of the term “artificial intelligence” (all lower case).


This is a great summary from Nature of the current state of memristors and their potential for a new computational paradigm.

The future of electronics based on memristive systems

Abstract
A memristor is a resistive device with an inherent memory. The theoretical concept of a memristor was connected to physically measured devices in 2008 and since then there has been rapid progress in the development of such devices, leading to a series of recent demonstrations of memristor-based neuromorphic hardware systems. Here, we evaluate the state of the art in memristor-based electronics and explore where the future of the field lies. We highlight three areas of potential technological impact: on-chip memory and storage, biologically inspired computing and general-purpose in-memory computing. We analyse the challenges, and possible solutions, associated with scaling the systems up for practical applications, and consider the benefits of scaling the devices down in terms of geometry and also in terms of obtaining fundamental control of the atomic-level dynamics. Finally, we discuss the ways we believe biology will continue to provide guiding principles for device innovation and system optimization in the field.


This is another signal of the emergence of a new scientific paradigm into everyday reality.

Spooky quantum entanglement goes big in new experiments

Two teams entangled the motions of two types of small, jiggling devices
Quantum entanglement has left the realm of the utterly minuscule, and crossed over to the just plain small. Two teams of researchers report that they have generated ethereal quantum linkages, or entanglement, between pairs of jiggling objects visible with a magnifying glass or even the naked eye — if you have keen vision.

Physicist Mika Sillanpää and colleagues entangled the motion of two vibrating aluminum sheets, each 15 micrometers in diameter — a few times the thickness of spider silk. And physicist Sungkun Hong and colleagues performed a similar feat with 15-micrometer-long beams made of silicon, which expand and contract in width in a section of the beam. Both teams report their results in the April 26 Nature.

“It’s a first demonstration of entanglement over these artificial mechanical systems,” says Hong, of the University of Vienna. Previously, scientists had entangled vibrations in two diamonds that were macroscopic, meaning they were visible (or nearly visible) to the naked eye. But this is the first time entanglement has been seen in macroscopic structures constructed by humans, which can be designed to meet particular technological requirements.


It’s taking longer than many anticipated to deliver powerful augmented reality - here’s one very good signal.
“There are lots of applications for this technology, including in teaching, physiotherapy, laparoscopic surgery and even surgical planning,” said Watts, who developed the technology with fellow graduate student Michael Fiest.

Augmented reality system lets doctors see under patients’ skin without the scalpel

New technology lets clinicians see patients’ internal anatomy displayed right on the body.
New technology is bringing the power of augmented reality into clinical practice.
The system, called ProjectDR, allows medical images such as CT scans and MRI data to be displayed directly on a patient’s body in a way that moves as the patient does.

“We wanted to create a system that would show clinicians a patient’s internal anatomy within the context of the body,” explained Ian Watts, a computing science graduate student and the developer of ProjectDR.

The technology includes a motion-tracking system using infrared cameras and markers on the patient’s body, as well as a projector to display the images. But the really difficult part, Watts explained, is having the image track properly on the patient’s body even as they shift and move. The solution: custom software written by Watts that gets all of the components working together.
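The article describes the architecture but not the code. A minimal sketch of the kind of per-frame loop such a system needs might look like the following (all names are hypothetical; ProjectDR’s software isn’t public, and the real difficulty is in calibration and low-latency tracking):

```python
import numpy as np

def read_marker_pose():
    """Stub for the infrared tracking system: returns the patient's current
    pose as a 4x4 rigid transform. A real system would query the camera SDK."""
    return np.eye(4)

def project(image_2d):
    """Stub for the projector output: display the warped image on the body."""
    pass

# Measured once during setup: maps tracking-camera space to projector space.
CAMERA_TO_PROJECTOR = np.eye(4)

ct_slice = np.zeros((512, 512))   # placeholder for a CT or MRI slice

for frame in range(10_000):       # runs per frame, e.g. 60x per second
    patient_pose = read_marker_pose()             # where the body is right now
    body_to_projector = CAMERA_TO_PROJECTOR @ patient_pose
    # Warp the medical image by the current transform so it stays registered
    # to the anatomy as the patient shifts and moves (warp omitted here).
    warped = ct_slice
    project(warped)
```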


This is a must-see gif - a two-second video.

Girl lost both her hands as a baby. Here she is testing the precision and dexterity of her new 3D-printed bionics




This is an important signal to watch - right now it’s definitely a weak signal - but in the coming decades it may displace many current professional sports domains. The key is not just the looming ‘virtual reality’ dimension - it is the blending of a very wide variety of participation by pros and fans, spectatorship, and production across a vast diversity of gaming genres and actual games.

NINJA’S FORTNITE TOURNAMENT WAS AN EXHILARATING AND UNPRECEDENTED E-SPORTS EXPERIMENT

The future of e-sports may be in hybrid entertainment that puts fans, pros, and streamers together in the same server
As I found myself seated at a gaming PC on the floor of the Luxor hotel’s exuberant Esports Arena, preparing to play Fortnite among the best players in the world, I can say my confidence levels were not very high. I had never played video games competitively before, though not for lack of trying. I consider myself an above average player of most shooters, from the early days of Halo and Call of Duty to now Destiny and Overwatch. Yet the feeling of playing under this kind of pressure and against players of this caliber was alien to me.

But I went to Las Vegas last weekend to see what the first big Fortnite e-sports tournament was going to look like, and specifically how it would feel to participate in it. Unlike most e-sports competitions, this one let members of the public compete, and it all centered on the chance to play against Tyler “Ninja” Blevins, the most popular streamer on Twitch and one of the world’s most talented Fortnite players. Just a few minutes into my first match on Saturday evening, one of nine consecutive games Ninja would participate in, I found myself under fire. Seconds later, an opponent descended on me and took me out with a shotgun blast. I never stood a chance.

I looked up at the big screen behind me and off to the left, an enormous monitor featuring Ninja’s perspective spanning the entire back wall of the arena. Below it, Ninja was playing on his own custom machine located center stage. I wanted to see whether it was the Twitch star that had taken me out. Thankfully, it wasn’t; my poor performance wasn’t broadcast to hundreds of thousands of people watching online. But in a way, it would have been an honor to say I got to personally face off against one of the best, even if I inevitably lost. And that’s precisely what made the event, officially called Ninja Vegas 18, such an unprecedented e-sports experiment.

As for Ninja’s event, it was a resounding success. Although Ninja won only one of his nine games, the level of competition from both professional players and relatively unknown competitors was wildly entertaining, creating dozens of crowd-pleasing moments and surprise victories. And viewers agreed — more than 667,000 people tuned in to Ninja’s personal Twitch stream at the tournament’s peak. It broke the platform’s all-time concurrent viewer record Ninja himself set back in March when he live streamed a Fortnite session with Drake, NFL player JuJu Smith-Schuster, and rapper Travis Scott.


This is an interesting 5 min read about the difference between the ‘evils of adtech’ and real advertising. There are some worthwhile links in the article.

How True Advertising Can Save Journalism From Drowning in a Sea of Content

Journalism is in a world of hurt because it has been marginalized by a new business model that requires maximizing “content” instead. That model is called adtech.

We can see adtech’s effects in The New York Times’ In New Jersey, Only a Few Media Watchdogs Are Left, by David Chen. His prime example is the Newark Star-Ledger, “which almost halved its newsroom eight years ago,” and “has mutated into a digital media company requiring most reporters to reach an ever-increasing quota of page views as part of their compensation.”

That quota is to attract adtech placements.
While adtech is called advertising and looks like advertising, it’s actually a breed of direct marketing, which is a cousin of spam and descended from what we still call junk mail.


Quorum sensing is a widespread strategy for group decision-making at many levels of life - from bacteria and slime molds to insects and mammals - perhaps even ecosystems.

How to Sway a Baboon Despot

What other species can teach us about democracy
Early last year, more than 70 years after its publication, George Orwell’s Animal Farm appeared on The Washington Post’s best-seller list. A writer for the New York Observer declared the novel—an allegory involving a government run by pigs—a “guidepost” for politics in the age of Donald Trump. A growing body of research, however, suggests that animals may offer political lessons that are more than allegorical: Many make decisions using familiar political systems, from autocracy to democracy. How these systems function, and when they falter, may be telling for Homo sapiens.

As in human democracies, the types of votes in animal democracies vary. When deciding where to forage, for instance, Tonkean macaques line up behind their preferred leader; the one with the most followers wins. Swans considering when to take flight bob their heads until a “threshold of excitability” is met, at which point they collectively rise into the sky. Honeybee colonies needing a new home vote on where to go: Thomas Seeley, a Cornell biologist, has found that scout bees investigate the options and inform the other bees of potential sites through complex “waggle dances” that convey basic information (distance, direction, overall quality). When a majority is won over by a scout’s campaign, the colony heads for its new home.

Research also shows that animal democracies, like human ones, can go awry. For instance, Seeley found that bees sometimes chose a mediocre—even terrible—site over an objectively better option. When this happened, it was invariably because they had “satisficed”—that is, settled for a plausible choice that came in early, rather than waiting for more options. Seeley told me he once saw several bees return to a hive and perform “unenthusiastic, lethargic” dances. With no great choices, they began coalescing around the best of the middling ones. At the last minute, though, “one bee came back, and she was so excited,” Seeley said. “She danced and danced and danced. She must have found something wonderful. But it was too late.” The bees had picked their candidate; momentum carried the day.

Why do bees take a vote to begin with, though? In 2013, researchers at the Max Planck Institute for Human Development, the London School of Economics, and the University of Sussex used game theory to show that animals’ willingness to behave democratically redounds to their benefit. Compared with decisions handed down by tyrant leaders, democratic decisions are less likely to be flawed. Moreover, when animals have a chance to register their opinion, the gap between the average individual’s preferred outcome and the actual outcome tends to be smaller than it would be if the decision were made by fiat. In this way, animal democracy is stabilizing; few get their way, but most are relatively content.
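The game-theory result cited here echoes the classic Condorcet jury theorem: if each voter independently picks the better option slightly more often than chance, majority votes rapidly outperform any lone despot. A minimal simulation makes the point (illustrative only, not the Max Planck/LSE/Sussex model):

```python
import random

def majority_is_right(n_voters, p_correct):
    """One collective decision: each voter independently picks the better
    option with probability p_correct; the majority choice wins."""
    correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
    return correct_votes > n_voters / 2

def accuracy(n_voters, p_correct=0.6, trials=100_000):
    hits = sum(majority_is_right(n_voters, p_correct) for _ in range(trials))
    return hits / trials

print(accuracy(1))     # the despot decides alone: ~0.60
print(accuracy(11))    # a small group vote:       ~0.75
print(accuracy(101))   # a large swarm:            ~0.98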


This is an amazing development in our understanding of how cellular ecosystems can communicate and exchange matter - especially when under stress. It is a means of exchange complementary to horizontal gene transfer. The illustrative gif is worth the view.

Cells Talk and Help One Another via Tiny Tube Networks

How did the tunneling nanotubes go unnoticed for such a long time? Lou notes that in the last couple of decades, cancer research has centered primarily on detecting and therapeutically targeting mutations in cancer cells — and not the structures between them. “It’s right in front of our face, but if that’s not what people are focusing on, they’re going to miss it,” he said.

That’s changing now. In the last few years, the number of researchers working on TNTs and figuring out what they do has risen steeply. Research teams have discovered that TNTs transfer all kinds of cargo beyond microRNAs, including messenger RNAs, proteins, viruses and even whole organelles, such as lysosomes and mitochondria.

To understand whether or not the cells actively regulate these transfers, Haimovich challenged them with heat shock and oxidative stress. If changes in the environmental conditions changed the rate of RNA transfer, that “would suggest that this is a biologically regulated mechanism, not just diffusion of RNA by chance,” he explained. He found that oxidative stress did induce an increase in the rate of transfer, while heat shock induced a decrease. Moreover, this effect was seen if stress was inflicted on acceptor cells but not if it was also inflicted on donor cells prior to co-culture, Haimovich clarified by email. “This suggests that acceptor cells send signals to the donor cells ‘requesting’ mRNA from their neighbors,” he said. His results were reported in the Proceedings of the National Academy of Sciences last year.

“Our general hypothesis is that when a cell is in danger or is dying or is stressed, the cell tries to implement a way of communication that is normally used during development, because we believe that these TNTs are more for fast communication in a developing organism,” she said. “However, when the cell is affected by a disease or infected by a virus or prion, the cell is stressed out, and it sends these protrusions to try to get help from cells that are in good health — or to discharge the prions.”


The world of plants is full of surprises.

Trees are not as 'sound asleep' as you may think

High-precision three-dimensional surveying of 21 different species of trees has revealed a previously unknown cycle of subtle canopy movement during the night. The 'sleep cycles' differed from one species to another. Detection of anomalies in overnight movement could become a future diagnostic tool to reveal stress or disease in crops.

Overnight movement of leaves is well known for tree species belonging to the legume family, but it was only recently discovered that some other trees also lower their branches by up to 10 centimeters at night and raise them back in the morning. These branch movements are slow and subtle, and take place at night, which makes them difficult to identify with the naked eye. However, terrestrial laser scanning, a 3-dimensional surveying technique developed for precision mapping of buildings, makes it possible to measure the exact position of branches and leaves.


This is a very interesting project - how individuals, crowdsourced and other funding can catalyze ways to mitigate large problems.
“I would never be able to work on a photo-sharing app or ‘internet startup XYZ,'” he says. “I think people overestimate the risk of high-risk projects. Personally, I think I would find it much harder to make a photo-sharing app a success–it sounds counterintuitive, because it’s much easier from an engineering perspective, but I think if you work on something that’s truly exciting and bold and complicated, then you will attract the kind of people that are really smart and talented. People that like solving complicated problems.”

The Revolutionary Giant Ocean Cleanup Machine Is About To Set Sail

Boyan Slat dropped out of school to work on his design for a device that could collect the trillions of pieces of plastic floating in the ocean. After years of work, it’s ready to take its first voyage.
Six years ago, the technology was only an idea presented at a TEDx talk. Boyan Slat, the 18-year-old presenter, had learned that cleaning up the tiny particles of plastic in the ocean could take nearly 80,000 years. Because of the volume of plastic spread through the water, and because it is constantly moving with currents, trying to chase it with nets would be a losing proposition. Slat instead proposed using that movement as an advantage: With a barrier in the water, he argued, the swirling plastic could be collected much more quickly. Then it could be pulled out of the water and recycled.

Some scientists have been skeptical that the idea is feasible. But Slat, undeterred, dropped out of his first year of university to pursue the concept, and founded a nonprofit to create the technology, The Ocean Cleanup, in 2013. The organization raised $2.2 million in a crowdfunding campaign, and other investors, including Salesforce CEO Marc Benioff, brought in millions more to fund research and development. By the end of 2018, the nonprofit says it will bring back its first harvest of ocean plastic from the North Pacific Gyre, along with concrete proof that the design works. The organization expects to bring 5,000 kilograms of plastic ashore per month with its first system. With a full fleet of systems deployed, it believes that it can collect half of the plastic trash in the Great Pacific Garbage Patch–around 40,000 metric tons–within five years.


I have to say that I love this paradox of sensorial goodness and gustatory wonder.

A Paean to PB&P

Why a peanut butter and pickle sandwich is the totally not-gross snack you need in your mouth right now.
Dwight Garner, an accomplished New York Times book critic, can count himself a member of the rarefied club of journalists whose writing has actually moved hearts and minds on a topic of great importance. In one 2012 article, he changed my life, intimately and permanently, with an ode to an object I’d never previously considered with the solemnity it deserves: the peanut butter and pickle sandwich.

When I clicked on Garner’s piece “Peanut Butter Takes On an Unlikely Best Friend” in October 2012, it was with great skepticism. I expected to be trolled with outrageous, unsupported assertions and straw-man arguments. Instead, I found myself drawn in by lip-smacking prose and a miniature history lesson. Peanut butter and pickle sandwiches were a hit at Depression-era lunch counters, I learned, and in cookbooks from the 1930s and ’40s, which recommended they be crafted with pickle relish rather than slices or spears. Garner quoted the founder of a peanut butter company, who remarked that the savory-and-sour flavor profile of the sandwich is more common in South and East Asian cuisines. This observation was my eureka moment: One of my favorite Thai dishes, papaya salad, traditionally combines raw peanuts with a lime and rice vinegar–based dressing. Perhaps the sandwich I’d only ever imagined in the context of stupid jokes about pregnancy cravings could be equally delicious.

Thursday, April 26, 2018

Friday Thinking 27 April 2018






“The problem of the smart city has been that when you start with technology without a strong idea of why you are deploying the technology and for what kind of needs, then you only end up solving technology problems,” says Bria. “Every vendor has a vertical business model so in Barcelona we ended up with problems such as sensors in the pavement that didn’t talk to the lighting or connect with other sensors so there was inoperability, yes, but we also had business model lock-in. You end up outsourcing critical urban services to big providers without being able to shift from one provider to another and without being able to be in control of the data, and even knowing who owns what.”

Bria says such lock-in not only threatens the solvency of cities–because you are tied in to maintenance contracts for systems which cannot be scaled–but it also stifles innovation particularly in terms of the local economy. Digital innovation and support for Barcelona’s 13,000 tech companies is the second pillar of Bria’s strategy.

“We are creating an open digital marketplace to make procurement more transparent so small companies should be able to come on board and compete in a fair way with the big players,” explains Bria.

How Barcelona’s smart city strategy is giving ‘power to the people’




Retail has been in a constant state of renewal since the earliest days of commerce. Artisans were disrupted by merchants, who were disrupted by bazaars and spice-route traders. Pushcarts disrupted stand-alone stores. The Sears Roebuck catalog of 1893 disrupted the first era of brick-and-mortar retail. Malls disrupted the town square; superstores and category-killers disrupted the local five-and-dime. And now, off-price, dollar stores, fast-fashion and online players are shaking up the industry. Through it all, retail has survived.

The disrupter du jour is, of course, Amazon — which collected more than 40 cents of every dollar spent online last year.

Why We Should Be Optimistic About Retail




But asking whether something is a particle or a wave is like asking whether Hobbes from the comic strip Calvin and Hobbes was an anthropomorphic tiger (as Calvin saw him) or a stuffed tiger toy (as everyone else saw him). Clearly, in the world of the comic strip, Hobbes was simply Hobbes: he had properties like anthropomorphic tigers and properties like stuffed animals, but he was his own thing, and fitting him into either category would miss some of Hobbes’s important properties, his intrinsic Hobbes-ness. Likewise, we should not ask whether an electron is a particle or a wave; it is its own thing. In particular, electrons are excitations of a particular quantum field—the electron field. In some situations an excitation can look like a particle, and in others it can act like a wave, but to force a quantum field into either of these categories would miss some of its intrinsic properties.

This property of quantum fields explains a lot. Why, for example, does every electron in the universe have the same mass of 510,998.910 electron volts? Well, any experiment takes some time to perform, so the electrons we can measure are electrons that have stuck around long enough to be measured—they are the excitations that didn’t die away quickly. This means that these particular waves in the quantum field are set up just right so that the energy and momentum sum up correctly. Thus, when we measure their properties, they’ll have the “mass” that is set by the field’s intrinsic properties. These properties are the same everywhere, as the field permeates the entire Universe. So every electron we can find and measure is an excitation of the same field, and thus has the same mass. There can be electron field excitations that don’t have the right mass, but these don’t last long enough for us to weigh them. If you have access to a particle collider though, you might be able to set up some of these short-lived excitations and demonstrate that particles can indeed exist with the “wrong” mass, albeit briefly.

The Hitchhiker’s Guide to Quantum Field Theory
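The argument about the “right” mass can be put in one line of standard notation: long-lived excitations of the field satisfy the relativistic energy–momentum relation (they are “on shell”, with the mass fixed by the field), while short-lived “off-shell” excitations may violate it for roughly the time the energy–time uncertainty relation allows. As a sketch:

```latex
E^2 = (pc)^2 + (m c^2)^2 \quad \text{(on shell: } m \text{ set by the field)},
\qquad
\Delta t \sim \frac{\hbar}{\Delta E} \quad \text{(lifetime of an off-shell excitation)}.
```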





First the bad news - Big Brother, Corporate Leery Uncles, Little Siblings from the neighborhood - the world of Big Data, surveillance, sousveillance and more. This is something to watch. It signals the shadow side of the meme of leadership: power that corrupts, and the absolute corruption of absolute power.
“Nefarious ideas became trivial to implement; everyone’s a suspect, so we monitored everything. It was a pretty terrible feeling.”

Palantir Knows Everything About You

Peter Thiel’s data-mining company is using War on Terror tools to track American citizens. The scary thing? Palantir is desperate for new customers.
High above the Hudson River in downtown Jersey City, a former U.S. Secret Service agent named Peter Cavicchia III ran special ops for JPMorgan Chase & Co. His insider threat group—most large financial institutions have one—used computer algorithms to monitor the bank’s employees, ostensibly to protect against perfidious traders and other miscreants.

Aided by as many as 120 “forward-deployed engineers” from the data mining company Palantir Technologies Inc., which JPMorgan engaged in 2009, Cavicchia’s group vacuumed up emails and browser histories, GPS locations from company-issued smartphones, printer and download activity, and transcripts of digitally recorded phone conversations. Palantir’s software aggregated, searched, sorted, and analyzed these records, surfacing keywords and patterns of behavior that Cavicchia’s team had flagged for potential abuse of corporate assets. Palantir’s algorithm, for example, alerted the insider threat team when an employee started badging into work later than usual, a sign of potential disgruntlement. That would trigger further scrutiny and possibly physical surveillance after hours by bank security personnel.

Over time, however, Cavicchia himself went rogue. Former JPMorgan colleagues describe the environment as Wall Street meets Apocalypse Now, with Cavicchia as Colonel Kurtz, ensconced upriver in his office suite eight floors above the rest of the bank’s security team. People in the department were shocked that no one from the bank or Palantir set any real limits. They darkly joked that Cavicchia was listening to their calls, reading their emails, watching them come and go. Some planted fake information in their communications to see if Cavicchia would mention it at meetings, which he did.

It all ended when the bank’s senior executives learned that they, too, were being watched, and what began as a promising marriage of masters of big data and global finance descended into a spying scandal. The misadventure, which has never been reported, also marked an ominous turn for Palantir, one of the most richly valued startups in Silicon Valley. An intelligence platform designed for the global War on Terror was weaponized against ordinary Americans at home.


This is an important piece for everyone - links to the playbook are provided.

An Online Security Playbook for Everyone

A spotlight on Citizen Lab, a Ford-Mozilla Open Web Fellowship host organization
Resolving to better protect your online privacy and security is easy. But acting on that resolution can be a challenge.

Do you need a VPN? If so, which one? What browser settings should you enable or disable? And what’s your password strategy?
The number of tactics for staying safe online can be dizzying. So to help users better navigate the world of digital security, Citizen Lab — the cyberspace R&D department at the University of Toronto — built Security Planner.

“If you Google How do I stay safe online, there’s a ton of advice out there. A lot of it’s really conflicting or super technical,” says Christine Schoellhorn, the Product Manager for Security Planner. “We wanted to reduce those barriers by making advice that was personalized, trustworthy, and highly usable.”

Rather than overwhelming users with a lengthy list of tools and literature, Security Planner starts with a simple survey, and then delivers personalized recommendations.


This is a good signal of the inevitability of institutional innovation for the digital environment. A number of institutional innovations will have to involve protections related to data and AI. As I’ve noted before, we should be moving to create an arm’s-length agency - an Auditor General of Algorithms (and associated technologies) - to ensure that AI-ssistants are actually performing as they should.
At the M.I.T. Media Lab, we are starting to refer to such technology as “extended intelligence” rather than “artificial intelligence.” The term “extended intelligence” better reflects the expanding relationship between humans and society, on the one hand, and technologies like AI, blockchain, and genetic engineering on the other. Think of it as the principle of bringing society or humans into the loop.

A.I. Engineers Must Open Their Designs To Democratic Control

When it comes to A.I., we need to keep humans in the loop.
In many ways, the most pressing issues of society today — increasing income disparity, chronic health problems, and climate change — are the result of the dramatic gains in higher productivity we’ve achieved with technology and science. The internet, artificial intelligence, genetic engineering, crypto-currencies, and other technologies are providing us with ever more tools to change the world around us.

But there is a cost.
We’re now awakening to the implications that many of these technologies have for individuals and society. We can directly see, for instance, the effect artificial intelligence and the use of algorithms have on our lives, whether through the phones in our pockets or Alexa on our coffee table. AI is now making decisions for judges about the risks that someone accused of a crime will violate the terms of his pretrial probation, even though a growing body of research has shown flaws in such decisions made by machines. An AI program that set school schedules in Boston was scrapped after outcry from working parents and others who objected to its disregard of their schedules.


This is a very important signal of the transformation of our larger narrative of the ‘isolated, atomistic, selfish individual’. A must-read for anyone interested in the social roots of wellbeing.

Human Beings Are Wired For Morality

Though our news coverage paints a grim picture, we have grounds for optimism
When it comes to well-being, it seems to matter greatly how it was sought, and why it was sought. According to research from the University of North Carolina at Chapel Hill, our cells react positively to well-being produced through actions motivated by a noble purpose, while well-being generated as a consequence of mere self-gratification is correlated with long-term negative outcomes. Apparently, “feeling connected to a larger community through a service project” is associated with a decrease in a type of negative stress-induced gene expression.

The view that we’re wired for morality is not a new one, though certainly much of the scientific evidence for it is. Long before anyone had heard of genetics, Adam Smith and Charles Darwin both articulated, each in their own way, why social animals would necessarily behave more morally than immorally.

The University of North Carolina research mentioned above is described here:

Human cells respond in healthy, unhealthy ways to different kinds of happiness

The sense of well-being derived from "a noble purpose" may provide cellular health benefits, whereas "simple self-gratification" may have negative effects, despite an overall perceived sense of happiness, researchers found. "A functional genomic perspective on human well-being" was published July 29 in Proceedings of the National Academy of Sciences.

Past work by Cole and colleagues had discovered a systematic shift in gene expression associated with chronic stress, a shift "characterized by increased expression of genes involved in inflammation" that are implicated in a wide variety of human ills, including arthritis and heart disease, and "decreased expression of genes involved in … antiviral responses," the study noted. Cole and colleagues coined the phrase "conserved transcriptional response to adversity" or CTRA to describe this shift. In short, the functional genomic fingerprint of chronic stress sets us up for illness, Fredrickson said.

But if all happiness is created equal, and equally opposite to ill-being, then patterns of gene expression should be the same regardless of hedonic or eudaimonic well-being. Not so, found the researchers.

Eudaimonic well-being was, indeed, associated with a significant decrease in the stress-related CTRA gene expression profile. In contrast, hedonic well-being was associated with a significant increase in the CTRA profile. Their genomics-based analyses, the authors reported, reveal the hidden costs of purely hedonic well-being.


This is a good signal of the growing dissatisfaction with the current state of scientific publication - and perhaps a good indication of a new phase. For anyone who hasn’t heard of Bret Victor, this is a must-read.
if science was to be an open enterprise, the tools that are used to do it should themselves be open. Commercial software whose source code you were legally prohibited from reading was “antithetical to the idea of science,” where the very purpose is to open the black box of nature.

At every turn, IPython chose the way that was more inclusive, to the point where it’s no longer called “IPython”: The project rebranded itself as “Jupyter” in 2014 to recognize the fact that it was no longer just for Python.

There are now 1.3 million of these notebooks hosted publicly on GitHub. They’re in use at Google, Bloomberg, and NASA; by musicians, teachers, and AI researchers; and in “almost every country on Earth.”

The Scientific Paper Is Obsolete

Here’s what’s next.
The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

The paper announcing the first confirmed detection of gravitational waves was published in the traditional way, as a PDF, but with a supplemental IPython notebook. The notebook walks through the work that generated every figure in the paper. Anyone who wants to can run the code for themselves, tweaking parts of it as they see fit, playing with the calculations to get a better handle on how each one works. At a certain point in the notebook, it gets to the part where the signal that generated the gravitational waves is processed into sound, and this you can play in your browser, hearing for yourself what the scientists heard first, the bloop of two black holes colliding.
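You can approximate that famous “bloop” yourself in a few lines. This is a toy chirp, not LIGO’s actual analysis (which the public GW150914 notebook walks through); it assumes only numpy and scipy are installed:

```python
import numpy as np
from scipy.io import wavfile

fs = 4096                          # sample rate (Hz), as in LIGO's public data
t = np.linspace(0, 1.0, fs, endpoint=False)

# Toy inspiral: frequency sweeps upward and amplitude grows until "merger".
freq = 35 + 215 * t**3             # ~35 Hz rising to ~250 Hz
phase = 2 * np.pi * np.cumsum(freq) / fs
signal = t**2 * np.sin(phase)
signal /= np.abs(signal).max()

wavfile.write("bloop.wav", fs, (signal * 32767).astype(np.int16))
```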


Not only is the traditional scientific paper obsolete - so is the current publishing paradigm. If science is to continue to advance, scientific knowledge has to be available to everyone.
The French stalemate is the latest in a series of disputes between publishers and universities around the world. In Germany, around 200 institutions have terminated their Elsevier subscriptions in order to put pressure on the publisher during ongoing negotiations for a new nationwide licensing agreement. Discussions between Elsevier and Project Deal, an alliance of German universities and research institutions, have made little progress since they began in 2016.

French Universities Cancel Subscriptions to Springer Journals

Negotiations between the publisher and a national consortium of academic institutions have reached a stalemate.
French research organizations and universities have cancelled their subscriptions to Springer journals, due to an impasse in fee negotiations between the publisher and Couperin.org, a national consortium representing more than 250 academic institutions in France.

After more than a year of discussions, Couperin.org and SpringerNature, which publishes more than 2,000 scholarly journals belonging to Springer, Nature, and BioMedCentral, have failed to reach an agreement on subscriptions for its Springer journals. The publisher’s proposal includes an increase in prices, which the consortium refuses to accept.

Although Couperin.org and its members were expecting the publisher to cut access to Springer journals on April 1, a SpringerNature spokesperson tells The Scientist that the publisher will continue to provide French institutions with access to its journals while discussions continue. “Springer Nature is disappointed by Couperin’s decision. . . . It is with regret that our concessions have been deemed to be insufficient,” the spokesperson writes. “[As] requested by Couperin, we are considering a further proposal and during this time access will remain open.”


This is a great signal for accelerating advances in our understanding and imaging of biological processes. The 10 videos are awesome - well worth the view.

New Microscope Captures Detailed 3-D Movies of Cells Deep Within Living Systems

Merging lattice light sheet microscopy with adaptive optics reveals the most detailed picture yet of subcellular dynamics in multicellular organisms.
By combining two imaging technologies, scientists can now watch in unprecedented 3-D detail as cancer cells crawl, spinal nerve circuits wire up, and immune cells cruise through a zebrafish’s inner ear.

Physicist Eric Betzig, a group leader at the Howard Hughes Medical Institute’s Janelia Research Campus, and colleagues report the work April 19, 2018, in the journal Science.

Scientists have imaged living cells with microscopes for hundreds of years, but the sharpest views have come from cells isolated on glass slides. The large groups of cells inside whole organisms scramble light like a bagful of marbles, Betzig says. “This raises the nagging doubt that we are not seeing cells in their native state, happily ensconced in the organism in which they evolved.”

Even when viewing cells individually, the microscopes most commonly used to study cellular inner workings are usually too slow to follow the action in 3-D. These microscopes bathe cells with light thousands to millions of times more intense than the desert sun, Betzig says. “This also contributes to our fear that we are not seeing cells in their natural, unstressed form.”


This is a nice summary of the significant impact of the discovery of just how widespread horizontal gene transfer is.
One example is the reconceptualization of the 'tree of life' as an 'entangled web' that can’t be reconstructed accurately: gene transfers have blurred the lines between species too much, some claim, and the genes in any genome can tell too many different stories to usefully reveal the relationships among diverse organisms.
This is worth the read for anyone interested in the evolution of the theory of evolution.
And so work on the tree of life has tended to disregard gene transfers, defining them as noise. But the two teams behind this month’s set of papers say we shouldn’t be so quick to dismiss them. “If you account for gene transfers in the right way,” said Vincent Daubin, a biologist at the University of Lyon in France and an author of one of the studies, “you realize that there’s a lot to be taken from them: how species are related, when they lived, who they had ecological contact with.”

Chronological Clues to Life’s Early History Lurk in Gene Transfers

To date the branches on the evolutionary tree of life, researchers are looking at horizontal gene transfers among ancient microorganisms, which once seemed only to muddle the record.
Scientists who want to uncover the details of life’s 3.8-billion-year history on Earth find themselves in murky territory as soon as they look earlier than half a billion years ago. Before then, microorganisms dominated the planet, but — unlike the animals and plants that later emerged — they left behind barely any fossils to mark their ancient pasts, and attempts to infer their family trees from their genes have proved frustrating.

But two papers published earlier this month in Nature Ecology & Evolution are poised to bring greater clarity to the study of evolution. One has already provided additional evidence for the role that early life played 3.5 billion years ago. The key to their success lay in finding ways to exploit what many researchers have regarded as an obstacle to progress rather than a tool.

Traditionally, scientists have relied on “rocks and clocks” to date the tree of life: fossil evidence from the geological record, and “molecular clock” estimates that infer how long ago related species diverged by analyzing the rate at which mutations built up in their genomic sequences. But there’s a problem: Molecular clock rates differ between lineages — they’re much faster in rodents than in primates, for instance — so if they’re not calibrated against fossil data, they can lead to the wrong conclusions.
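To make the "molecular clock" arithmetic concrete, here is a minimal sketch assuming a strict (constant-rate) clock; the rate and distance values are illustrative only, and the article's caveat is precisely that this constant-rate assumption fails without fossil calibration.

```python
# Minimal strict molecular clock, for illustration only. Assumes a
# constant substitution rate; real analyses relax this, since rates
# differ between lineages (faster in rodents than in primates).

def divergence_time(genetic_distance: float, rate_per_year: float) -> float:
    """Estimate years since two lineages diverged.

    genetic_distance: substitutions per site separating the two sequences.
    rate_per_year:    substitutions per site per year, per lineage.
    The factor of 2 accounts for mutations accruing on both branches.
    """
    return genetic_distance / (2 * rate_per_year)

# Illustrative numbers (not from the article): 2% sequence divergence
# at 1e-9 substitutions/site/year implies a split ~10 million years ago.
print(divergence_time(0.02, 1e-9))  # 10000000.0
```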

That’s why the spotty fossil record for the vast majority of evolutionary history is such a problem. To make matters even more complicated, that history is characterized by horizontal gene transfer, a process in which microorganisms integrate genes from distantly related species into their own genomes, rather than inheriting them vertically from their parent cell. (One effect of horizontal gene transfer can be seen in the spread of antibiotic resistance among bacteria in the past century.) Over billions of years, it’s been such a dominant evolutionary force among microbes that some scientists have abandoned the metaphor of a tree of life in favor of a tangled web that can’t be reconstructed accurately. Gene transfers have blurred the lines between species too much, they claim, and the genes in any genome can tell too many different stories to usefully reveal the relationships among diverse organisms.


Here’s some great news from the serendipity and domestication of DNA frontier.

Scientists accidentally create mutant enzyme that eats plastic bottles

The breakthrough, spurred by the discovery of plastic-eating bugs at a Japanese dump, could help solve the global plastic pollution crisis
Scientists have created a mutant enzyme that breaks down plastic drinks bottles – by accident. The breakthrough could help solve the global plastic pollution crisis by enabling for the first time the full recycling of bottles.

The new research was spurred by the discovery in 2016 of the first bacterium that had naturally evolved to eat plastic, at a waste dump in Japan. Scientists have now revealed the detailed structure of the crucial enzyme produced by the bug.

The international team then tweaked the enzyme to see how it had evolved, but tests showed they had inadvertently made the molecule even better at breaking down the PET (polyethylene terephthalate) plastic used for soft drink bottles. “What actually turned out was we improved the enzyme, which was a bit of a shock,” said Prof John McGeehan, at the University of Portsmouth, UK, who led the research. “It’s great and a real finding.”


On the same wavelength here’s another signal.
The company’s first commercial products are focused on improving drought tolerance, one of the most difficult traits to address through genetic modification. “It’s like a symphony,” founder von Maltzahn says of a plant’s reaction to water stress, “and GMOs are like slamming down on one note on one instrument.” Drought conditions are likely to become a greater threat to agriculture because of global warming.
Microbe coatings have boosted cotton yields by an average of 14 percent in full-scale commercial trials in Texas, and wheat yields by as much as 15 percent in Kansas.

Scientists Want to Replace Pesticides With Bacteria

Indigo’s microbes could change Big Agriculture forever.
In humans, a healthy microbiome—the universe of bacteria, fungi, and viruses that lives inside all of us—is increasingly recognized as critical to overall health. The same is true of the plant world, and Indigo is among the dozen or so agricultural technology startups trying to take advantage of the growing scientific consensus. Their work is enabled by advances in machine learning and a steep reduction in the cost of genetic sequencing, used by companies to determine which microbes are present.

Approaches vary: AgBiome LLC, with funding from the Bill & Melinda Gates Foundation, is studying how microbes can help control sweet potato weevils in Africa, while Ginkgo Bioworks Inc. announced a $100 million joint venture with Bayer AG to explore how microbes can encourage plants to produce their own nitrogen.

Indigo is the best-funded of the bunch, having raised more than $400 million. To develop its microbial cocktails, Indigo agronomists comb through normal fields in dry conditions to see which plants seem healthier than average. They take samples of the thriving plants and “fingerprint” their micro­biomes using genetic sequencing; once they’ve done this with thousands of samples, they use statistical methods to pick out which microbes occur most often in the healthiest plants. These proceed to testing, then large-scale field trials.
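As a rough illustration of that statistical step, here is a hedged sketch; the data, workflow, and scoring below are invented for illustration and are not Indigo's actual pipeline. The idea is simply to count how often each microbe turns up in the healthiest plants versus the rest, then rank by enrichment.

```python
# Hypothetical microbe-enrichment screen, loosely modelled on the
# workflow described above; data and scoring are invented.
from collections import Counter

# Each sample is the set of microbe IDs detected by sequencing.
healthy_samples = [{"m1", "m2", "m7"}, {"m1", "m7"}, {"m1", "m3", "m7"}]
average_samples = [{"m2", "m3"}, {"m3", "m7"}, {"m2"}]

def frequencies(samples):
    counts = Counter(m for s in samples for m in s)
    return {m: n / len(samples) for m, n in counts.items()}

f_healthy = frequencies(healthy_samples)
f_average = frequencies(average_samples)

# Rank microbes by how much more often they occur in healthy plants.
# (A real pipeline would use proper statistics and far more samples;
# the denominator gets a pseudocount so unseen microbes don't divide by 0.)
enrichment = {
    m: f_h / f_average.get(m, 1 / (len(average_samples) + 1))
    for m, f_h in f_healthy.items()
}
for microbe, score in sorted(enrichment.items(), key=lambda kv: -kv[1]):
    print(microbe, round(score, 2))
```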


This is an important signal of the growing awareness of our microbial and viral ecosystems - not just as regular participants, but as agents of evolution in their own right, via horizontal gene transfer.

Trillions Upon Trillions of Viruses Fall From the Sky Each Day

Scientists have surmised there is a stream of viruses circling the planet, above the planet’s weather systems but below the level of airline travel. Very little is known about this realm, and that’s why the number of deposited viruses stunned the team in Spain. Each day, they calculated, some 800 million viruses cascade onto every square meter of the planet.
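That headline is easy to sanity-check: multiplying the per-square-meter deposition rate by the Earth's total surface area (the area figure is my addition, not from the article) lands in the hundreds of sextillions of viruses per day.

```python
# Back-of-envelope check of the headline. Only the per-square-meter
# rate comes from the study; the surface area is my assumption, and
# treating deposition as uniform over the whole globe is a huge
# simplification.
viruses_per_m2_per_day = 800e6   # deposition rate from the study
earth_surface_m2 = 5.1e14        # approximate total surface of the Earth

total_per_day = viruses_per_m2_per_day * earth_surface_m2
print(f"{total_per_day:.1e} viruses deposited per day")  # ~4.1e+23
```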

Most of the globe-trotting viruses are swept into the air by sea spray, and lesser numbers arrive in dust storms.
“Unimpeded by friction with the surface of the Earth, you can travel great distances, and so intercontinental travel is quite easy” for viruses, said Curtis Suttle, a marine virologist at the University of British Columbia. “It wouldn’t be unusual to find things swept up in Africa being deposited in North America.”

The study by Dr. Suttle and his colleagues, published earlier this year in the International Society for Microbial Ecology Journal, was the first to count the number of viruses falling onto the planet. The research, though, is not designed to study influenza or other illnesses, but to get a better sense of the “virosphere,” the world of viruses on the planet.

Generally it’s assumed these viruses originate on the planet and are swept upward, but some researchers theorize that viruses actually may originate in the atmosphere. (There is a small group of researchers who believe viruses may even have come here from outer space, an idea known as panspermia.)

Mostly thought of as infectious agents, viruses are much more than that. It’s hard to overstate the central role that viruses play in the world: They’re essential to everything from our immune system to our gut microbiome, to the ecosystems on land and sea, to climate regulation and the evolution of all species. Viruses contain a vast diverse array of unknown genes — and spread them to other species.

Last year, three experts called for a new initiative to better understand viral ecology, especially as the planet changes. “Viruses modulate the function and evolution of all living things,” wrote Matthew B. Sullivan of Ohio State, Joshua Weitz of Georgia Tech, and Steven W. Wilhelm of the University of Tennessee. “But to what extent remains a mystery.”


This is a good signal of not only the phase transition of global energy geopolitics but also of the emergence of a new economic narrative to displace the increasingly outmoded neoliberal narrative.

China made solar panels cheap. Now it’s doing the same for electric buses.

It is now more or less taken for granted that solar panels are getting cheaper and cheaper. But that didn’t just happen — solar PV did not jump onto that trajectory on its own. After all, solar panels have been around for decades, but they didn’t really start plunging down the cost curve until the mid- to late-2000s.

Germany deserves some credit for creating demand with its aggressive feed-in tariffs. President Barack Obama and the Democrats deserve some credit for creating demand with the 2009 stimulus bill. But the lion’s share of credit goes to China, which, rather than fiddling with tax breaks and credits and “market mechanisms,” invested a boatload of money into production subsidies, scaling the industry up by brute force.

China’s wild binge of solar manufacturing drove down the costs of panels, both by oversupplying the market and by hastening economies of scale. In effect, the country voluntarily took on the costs of pushing solar panels onto the “S-curve” of rapid growth, a strategy that will greatly benefit the Chinese — and the rest of humanity.
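The "cost curve" here is usually modelled as a learning curve (Wright's law): unit cost falls by a fixed fraction with every doubling of cumulative production. A minimal sketch, using an often-cited ~20 percent learning rate for solar PV purely as an illustrative assumption:

```python
# Wright's-law sketch of the cost curve the paragraph invokes. The 20%
# learning rate per doubling is a commonly cited ballpark for solar PV
# modules, used here for illustration only.
import math

def unit_cost(cum_production, base_production, base_cost, learning_rate=0.20):
    b = -math.log2(1 - learning_rate)  # cost elasticity per doubling
    return base_cost * (cum_production / base_production) ** (-b)

# Scale cumulative production 100x and unit cost falls to roughly 23%
# of the starting cost - the logic behind subsidizing manufacturing.
print(unit_cost(100.0, 1.0, 1.0))  # ~0.227
```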

Now there’s evidence that China is in the midst of doing the same thing for another key clean-energy product: electric buses.


This is an interesting 4 min video - a signal of the looming potential to change mass transit and urban transportation in the next couple of decades.

Infrastructure to Integrate and Scale Air Taxi Services in Cities

Watch how Volocopter imagines the future of urban transport. Have a look at the day-to-day operations transporting up to 10,000 passengers a day with a single point-to-point connection. Inside the system you can see how passengers alight, how batteries are charged, where the aircraft are stored, and how you will embark and take off. This infrastructure system allows urban air taxis to offer a comprehensive mobility network across the largest cities.
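As a rough sanity check on that 10,000-passengers-a-day figure, here is a back-of-envelope throughput calculation; the seat count and operating hours are my assumptions, not Volocopter's, but they show why fast, automated battery swaps and aircraft storage dominate the infrastructure design.

```python
# Back-of-envelope throughput for a single point-to-point connection.
# Only the 10,000 passengers/day figure comes from the video; the
# other parameters are assumptions for illustration.
passengers_per_day = 10_000
seats_per_aircraft = 2       # e.g. a small two-seat air taxi
operating_hours = 16

flights_needed = passengers_per_day / seats_per_aircraft
headway_seconds = operating_hours * 3600 / flights_needed
print(f"{flights_needed:.0f} flights/day, one departure every "
      f"{headway_seconds:.0f} seconds")  # 5000 flights, ~12 s headway
```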


The arms race of security - here’s a signal of using AI against malware.
“It’s a game of whack-a-mole,” says Charles Nicholas, a computer science professor at the University of Maryland, Baltimore County.

With this tool, AI could identify new malware as readily as it recognizes cats

A huge data set will help train algorithms to spot the nasty programs hiding in our computers.
From ransomware to botnets, malware takes seemingly endless forms, and it’s forever proliferating. Try as we might, the humans who would defend our computers from it are drowning in the onslaught, so they are turning to AI for help.

There’s just one problem: machine-learning tools need a lot of data. That’s fine for tasks like computer vision or natural-language processing, where large, open-source data sets are available to teach algorithms what a cat looks like, say, or how words relate to one another. In the world of malware, such a thing hasn’t existed—until now.

This week, the cybersecurity firm Endgame released a large, open-source data set called EMBER (for “Endgame Malware Benchmark for Research”). EMBER is a collection of more than a million representations of benign and malicious Windows portable executable (PE) files, a format where malware often hides. A team at the company also released AI software that can be trained on the data set. The idea is that if AI is to become a potent weapon in the fight against malware, it needs to know what to look for.

EMBER is meant to help automated cybersecurity programs keep up.
Instead of a collection of actual files, which could infect the computer of any researcher using them, EMBER contains a kind of avatar for each file, a digital representation that gives an algorithm an idea of the characteristics associated with benign or malicious files without exposing it to the genuine article.

This should help those in the cybersecurity community quickly train and test out more algorithms, enabling them to construct better and more adaptable malware-hunting AI.
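To make the idea concrete, here is a minimal hedged sketch of training a detector on precomputed feature vectors like EMBER's "avatars". The file names and the use of scikit-learn are assumptions for illustration - Endgame ships its own training code with the data set (reportedly a LightGBM baseline) - not the project's actual API.

```python
# Sketch: train a malware classifier on precomputed feature vectors
# (the per-file "avatars" described above). File names and the choice
# of scikit-learn are illustrative assumptions, not EMBER's own API.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical arrays: one row of static features per executable,
# label 1 = malicious, 0 = benign (unlabeled rows already removed).
X = np.load("ember_features.npy")  # shape: (n_samples, n_features)
y = np.load("ember_labels.npy")    # shape: (n_samples,)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = GradientBoostingClassifier()
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))
```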


This is an interesting piece with 8 very short videos illustrating the robots/AI at work.
For all the accusations I've received about being too theoretical - I have always loved IKEA furniture and have always found their instructions brilliantly simple and very easy to understand. In fact, after a few pieces of furniture, their paradigm - a user interface for pre-formed matter - becomes ever easier to use.
I find this a very interesting signal for the rapid emergence of an AI-Robot workforce looming in the near future.

Robots Continue Attempting to Master Ikea Furniture Assembly

These robots are slow, careful, and successful, making them way better than humans at assembling an Ikea chair
Apparently, one of the standards by which we should be measuring the progress of useful robotic manipulation is through the assembly of Ikea furniture. With its minimalistic and affordable Baltoscandian design coupled with questionably credible promises of effortless assembly, Ikea has managed to convince generations of inexperienced and desperate young adults (myself included) that we can pretend to be grownups by buying and putting together our own furniture. It’s never as easy as that infuriatingly calm little Ikea manual dude makes it look, though, and in terms of things we wish robots would solve, Ikea furniture assembly has ended up way higher on the priority list than maybe it should be.

We’ve seen a variety of robotic systems tackle Ikea in the past, but today in Science Robotics is (perhaps for the first time) a mostly off-the-shelf system of a few arms and basic sensors that can put together the frame of a Stefan chair kit autonomously(ish) and from scratch.

This research comes from the Control Robotics Intelligence (CRI) group at NTU in Singapore, and they’ve been working on the whole Ikea chair assembly thing for a while.


From segregated industrial bots to co-workers this is a signal of new working conditions.
“We didn’t expect large manufacturers would want to use such robots, because those robots can lift only a light weight and have limited capabilities,” said Kazuo Hariki, an executive director at Fanuc.

Japanese companies see big things in small-scale industrial robots

A two-armed robot in a Japanese factory carefully stacks rice balls in a box, which a worker carries off for shipment to convenience stores. At another food-packaging plant, a robot shakes pepper and powdered cheese over pasta that a person has just arranged in a container.

In a country known for bringing large-scale industrial robots to the factory floor, such relatively dainty machines have until recently been dismissed as niche and low-margin.

But as workforces age in Japan and elsewhere, collaborative robots - or “cobots” - are seen as a key way to help keep all types of assembly lines moving without replacing humans.

Japan’s Fanuc and Yaskawa Electric, two of the world’s largest robot manufacturers, didn’t see the shift coming. Now they are trying to catch up.


Here’s a great signal for DIY kits for teens.

Google's latest do-it-yourself AI kits include everything you need

There's a Raspberry Pi in the box, plus a new Android app.
Google's AIY kits have been helpful for do-it-yourselfers who want to explore AI concepts like computer vision, but they weren't really meant for newcomers when you had to supply your own Raspberry Pi and other must-haves. It'll be much easier to get started from now on: Google has released updated AIY Vision and AIY Voice kits that include what you need to get started. Both include a Raspberry Pi Zero WH board and a pre-provisioned SD card, while the Vision Kit also throws in a Raspberry Pi Camera v2. You won't be going on extra shopping trips (or downloading software) just to get the ball rolling.

At the same time, Google is promising more help when you're ready to get cracking. A companion Android app helps with setting up your kit, and the AIY website itself has been revamped with clearer instructions aimed at younger creators. The kits should now be better-suited to STEM students, not just tinkerers willing to dive in feet-first.

Both the Vision ($90) and Voice ($50) packs are reaching Target's online and retail stores in April, and they'll be available through other stores around the globe. That's definitely a price hike, but it's also a realistic price hike -- you're now paying for everything up front. In that sense, they're kinder to parents and anyone else who might not always read the fine print.


Plastic surgery was born helping soldiers disfigured in WWI and quickly expanded to many other domains, including enhancing beauty. In the world of sex reassignment surgeries the maxim has been - it’s easier to make a hole than build a pole. This surgical success may find applications well beyond its original purpose.

Johns Hopkins Performs First Total Penis and Scrotum Transplant in the World

Many soldiers returning from combat bear visible scars, or even lost limbs, caused by blasts from improvised explosive devices, or IEDs. However, some servicemen also return with debilitating hidden injuries — the loss of all or part of their genitals. Now, the Johns Hopkins reconstructive surgery team that performed the country’s first bilateral arm transplant in a wounded warrior has successfully performed the first total penis and scrotum transplant in the world.

A team of nine plastic surgeons and two urological surgeons was involved in the 14-hour surgery on March 26. They transplanted from a deceased donor the entire penis, scrotum (without testicles) and partial abdominal wall.


This is very, very interesting - a signal of an early domestication of fire?

Australian raptors start fires to flush out prey

In the first recorded instance of fire being used by animals other than humans, three Australian birds of prey species have been seen carrying burning twigs to set new blazes.
Australian Aboriginal lore is replete with references to birds carrying fire, and some traditional ceremonies even depict the behaviour. Now ornithologists have collected accounts from witnesses across the savannas of Australia’s far north, known as the Top End, suggesting three Australian birds of prey species use smouldering branches to spread fires and scare prey into their waiting talons.

Black kites (Milvus migrans), whistling kites (Haliastur sphenurus) and brown falcons (Falco berigora) all regularly congregate near the edges of bushfires, taking advantage of an exodus of small lizards, mammals, birds and insects – but it appears that some may have learnt not only to use fire to their advantage, but also to control it.