Thursday, September 15, 2016

Friday Thinking 16 Sept. 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


Content
Quotes

Articles
Chinese lecturer to use facial-recognition technology to check boredom levels among his students


Our understanding of technology may be advancing at an ever-accelerating rate, but our knowledge of these more vague concepts -- intelligence, consciousness, what the human mind even is -- remains in a ridiculously infantile stage. Technology may be poised to usher in an era of computer-based humanity, but neuroscience, psychology and philosophy are not. They're universes away from even landing on technology's planet, and these gaps in knowledge will surely drag down the projected AI timeline.

Most experts who study the brain and mind generally agree on at least two things: We do not know, concretely and unanimously, what intelligence is. And we do not know what consciousness is.

"To achieve the singularity, it isn't enough to just run today's software faster," Microsoft co-founder Paul Allen wrote in 2011. "We would also need to build smarter and more capable software programs. Creating this kind of advanced software requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this."

Defining human intelligence and consciousness is still more philosophy than neuroscience.

We don't understand AI because we don't understand intelligence




GOOGLE CALLS IT Project Sand Hill.
Since 2012, Suman Prasad and his team have worked with various Silicon Valley venture capital firms to identify “rocketship” startups before they really take off, and they help plug them into the Google machine. They help them build apps for Android phones, hook into Android Pay, and make use of countless other Google services, from Google Maps to Google ads. Prasad started the project in his Google “20 Percent Time,” but it has since grown into something much bigger. He’s now director of startups and VC partnerships, and at any given time, Project Sand Hill now serves a good 100 US startups, plus about 30 abroad, including places like Israel, India, and China.

Current members of the program include ticketing platform Eventbrite, fitness-focused My Fitness Pal, and the last-minute hotel booking company Hotel Tonight. And according to Google, eleven Project Sand Hill companies have become “unicorns”—startups valued at over a billion dollars—since joining the program, including Eventbrite, Houzz and Lyft. About half of the participating companies have gone on to raise an additional $7.5 billion in funding.

Project Sand Hill: Google’s Unknown Campaign to Track World’s Hottest Startups




This is only a website right now - but for anyone interested in new social media platforms and the blockchain, you can sign up for the alpha release of a social media platform based on the blockchain. This is likely to be at least a worthy experiment - especially for anyone who has been awakened to the huge value and use of a social media platform but is now tired of being Facebook’s product for advertisers.
You can publish, share and vote for entries, similar to Medium and other modern publishing platforms, with the difference that your content is actually published over a decentralized network rather than on our servers. Moreover, the votes are bundled with ETH micro transactions so if your content is good you’ll make ETH from it – in a way, mining with your mind.
Similar to how you don’t need to understand the concepts behind electricity in order to flip a switch and turn on the lights, we created an uncluttered user experience focused on creating and publishing content easily. All the complicated stuff happens in the background.

AKASHA - A Next-Generation Social Media Network

Powered by the Ethereum world computer
Embedded into the Inter-Planetary File System
AKASHA ( [aːkaːʃə], आकाश) is the Sanskrit word meaning “ether” in both its elemental and metaphysical senses.
The ancient Sanskrit-speaking civilization envisioned akasha as a metaphysical information network connecting humanity with itself and infinite knowledge. In this paradigm, thoughts, ideas, feelings, and experiences are stored forever and shared through the ether, which acts as a universal field connecting multiple planes of existence.

Thousands of years later the Internet was created, inventing in the process a new way to transmit and store thoughts, feelings, ideas, and experiences – this time as bits accessible to large numbers of people that are part of the same invisible network connecting billions of minds.

As a decentralized application AKASHA deploys a next-generation information architecture born from the fusion of ancient wisdom with new technologies such as Ethereum and the Inter-Planetary File System. With AKASHA your thoughts and ideas will echo throughout humanity’s existence, thanks to a planetary-scale information network immune to censorship by design.
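
For the technically curious - here is a minimal sketch, in Python, of the “voting with micro-transactions” idea described above. It is my own toy model, not AKASHA’s actual contracts or API: the names, the vote price and the payout rule are all assumptions, and the real system anchors content hashes on Ethereum while storing the content itself on IPFS.

from collections import defaultdict

VOTE_COST_ETH = 0.0005  # hypothetical micro-transaction attached to each vote

class ToyAkasha:
    def __init__(self):
        self.entries = {}                   # entry_id -> entry record
        self.balances = defaultdict(float)  # author -> earned ETH

    def publish(self, entry_id, author, content_hash):
        # In the real system the content lives on IPFS; only the content hash
        # and metadata would be anchored on the Ethereum chain.
        self.entries[entry_id] = {"author": author, "hash": content_hash, "votes": 0}

    def vote(self, entry_id, voter):
        entry = self.entries[entry_id]
        entry["votes"] += 1
        # The micro-payment bundled with the vote accrues to the author.
        self.balances[entry["author"]] += VOTE_COST_ETH

net = ToyAkasha()
net.publish("post-1", "alice", "Qm...exampleIPFSHash")   # hash is a placeholder
for voter in ["bob", "carol", "dave"]:
    net.vote("post-1", voter)
print(round(net.balances["alice"], 6))   # 0.0015 ETH under these assumed numbers

The point is only to show the bookkeeping: votes carry value, and that value flows to authors - “mining with your mind.”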


This is a MUST VIEW 18-min video for anyone concerned with the future of education, with the power of video games, and with the future of Virtual Environments to augment our learning and mastery in real-world systems. How can we combine the ever-increasing power to create models of the world with engaging and accessible interfaces to accelerate and scale learning?

"Videos games in teaching and learning systems" - James Paul Gee

Play performs an important role in children's intellectual, affective and social development and as such is regarded by some as the most “serious” activity undertaken by children. Play, at all ages, is an engaging way of fostering essential skills such as focus, creativity, collaboration and persistence. So why is play not exploited more widely in education beyond kindergarten? Can experiences that use play for learning and creativity inspire us to rethink the role of play in education?

James Paul Gee is the Mary Lou Fulton Presidential Professor of Literacy Studies and Regents’ Professor at Arizona State University. He is a member of the National Academy of Education.

He is the author of several books such as "Sociolinguistics and Literacies" and "An Introduction to Discourse Analysis". His most recent books have dealt with video games, language, and learning.



This is a short summary of the current situation and trends in the transformation of energy geo-politics. The graphics speak for themselves.
… if solar electricity continues on its current demonetization trajectory, by the time solar capacity triples to 600GW (by 2020 or 2021, as a rough estimate), we could see global unsubsidized solar prices that are roughly half the cost of coal and natural gas.
By roughly 2030, EVs with a 200+ mile range are going to be cheaper than the cheapest car sold in the U.S. in 2015.

Disrupting Energy

We are at the cusp of an energy revolution.
This blog is a look at how three technologies — solar, batteries and electric vehicles (EVs) — are poised to disrupt a $6 trillion energy industry over the next two decades.

I had the chance to sit down with Ramez Naam, the Chair of Energy & Environmental Systems at Singularity University and acclaimed author of the Nexus series, to discuss these major forces and their implications.

In 88 minutes, 470 exajoules of energy from the sun hits the Earth’s surface, as much energy as humanity consumes in a year.
In 112 hours — less than five days — it provides 36 zettajoules of energy. That’s as much energy as is contained in all proven reserves of oil, coal and natural gas on the planet.
If humanity could capture 1 part in 1,000 (one-tenth of one percent) of the solar energy striking the Earth — just one part in one thousand — we could have access to six times as much energy as we consume in all forms today.

Over the last 30 years, solar module prices have dropped by a factor of 100.
Critically — a new solar price record was set in Chile just a few weeks ago at $0.0291 per kWh — 58 percent less than the price of natural gas from a new plant!
And this is just the beginning. How cheap can it get?
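
A quick back-of-the-envelope check of those figures, in Python - it takes the article’s own numbers at face value (470 EJ per 88 minutes, annual human consumption equal to that 88-minute figure) and adds one derived number of my own, the average yearly price decline implied by a 100x drop over 30 years:

# Back-of-the-envelope check of the solar figures quoted above.
MINUTES_PER_YEAR = 365.25 * 24 * 60

solar_in_88_min_EJ = 470.0        # exajoules reaching the surface every 88 minutes
annual_consumption_EJ = 470.0     # the article equates this with yearly human use

annual_solar_EJ = solar_in_88_min_EJ * (MINUTES_PER_YEAR / 88)
print(round(annual_solar_EJ))                         # ~2.8 million EJ of sunlight per year

in_112_hours_ZJ = solar_in_88_min_EJ * (112 * 60 / 88) / 1000
print(round(in_112_hours_ZJ))                         # ~36 zettajoules, matching the text

captured_EJ = annual_solar_EJ * 0.001                 # capture 1 part in 1,000
print(round(captured_EJ / annual_consumption_EJ))     # ~6x current consumption

# Implied by "module prices dropped by a factor of 100 in 30 years":
annual_decline = 1 - (1 / 100) ** (1 / 30)
print(round(annual_decline * 100, 1))                 # ~14% average price decline per year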


This is an important consideration when trying to assess the very difficult plight of incumbents: whether to fall prey to ‘sunk costs’ or to cut their losses and shift to other investments. We may have to be vigilant about not bailing out banks again. :)

Creditors lose historic sums in oil bankruptcies, Moody’s says

For the lenders that bankrolled the shale boom, the oil-market crash may leave as much financial wreckage behind as the devastating telecom bust in the early 2000s, Moody’s Investors Service said.

On average, banks and bond investors have recovered only about $1 of every $5 they poured into the U.S. oil companies that eventually went bankrupt in 2015, according to the credit rating agency.

That amount is about a third of the money creditors historically have pulled out of drillers who default on their debt. It’s slightly less than investors recovered from bankrupt telecom firms in 2002, and “can only be described as catastrophic,” the credit ratings agency said in a new report released Monday.

Creditors aren’t getting much of their money back because the oil companies that went bankrupt last year were mostly small firms that ran up high debts in the heady days of the U.S. shale drilling bonanza and don’t have the assets or access to capital that larger firms do.
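
The implied comparison is easy to make explicit - a rough calculation from the quoted figures only:

# Rough inference from the Moody's figures quoted above.
recovery_2015 = 1 / 5                     # creditors got back ~$1 of every $5
historical_recovery = recovery_2015 * 3   # "about a third" of the historical average
print(round(historical_recovery * 100))   # implies drillers historically returned
                                          # roughly 60 cents on the dollar in default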


We are not only finding new elements but also new ways to make complex molecules quickly and efficiently.
The pace of innovation is such that even the experts are struggling to keep up, says Scott, who leads the US Department of Energy's efforts to develop benchmarks for the new catalysts' performance. “We need to make sure we are advancing the science that's most efficient,” she says.
And the scope of catalysis is increasing rapidly. “Twenty years ago,” says John Hartwig, a chemist at the University of California, Berkeley, “catalysis to make molecules that were complex did not exist.” Anyone who wanted to modify a large complicated structure would have to tear it down and build it back up, says Sanford. But now, chemists can often edit parts of a molecule precisely. “It's incredibly enabling,” she says.

The new breed of cutting-edge catalysts

Advances in catalyst research could create a superhighway to clean energy sources and a more-sustainable chemical industry
Catalysts are used in some 90% of processes in the chemical industry, and are essential for the production of fuels, plastics, drugs and fertilizers. At least 15 Nobel prizes have been awarded for work on catalysis. And thousands of chemists around the world are continually improving the catalysts they have and striving to invent new ones.

That work is partly driven by an interest in sustainability. The aim of catalysis is to direct reactions along precisely defined pathways so that chemists can skip reaction steps, reduce waste, minimize energy use and do more with less. And with growing concerns about climate change and the environment, sustainability has become increasingly important. Catalysis is a key principle of 'green chemistry': an industry-wide effort to prevent pollution before it happens.

Catalysts are also seen as the key to unlocking energy sources that are much more inert and difficult to use than coal, oil or gas, but much cleaner. Catalysis can make it more economically feasible to split water into oxygen and hydrogen fuel, or can open up new ways to use raw materials such as biomass or carbon dioxide. “These are feedstocks that are ripe for advances in catalysis,” says Melanie Sanford, a chemist at the University of Michigan in Ann Arbor.


Our understanding of the microbial world is at a new threshold - and who knows what we will find?
Already, the detection of these newfound organisms is challenging what scientists thought they knew about the chemical processes of biology, the tree of life and the manner in which microbes live and grow. The secrets of microbial dark matter may redefine how life evolved and exists, and even improve the understanding of, and treatments, for many diseases.
“Everything is changing,” says Kelly Wrighton, a microbiologist at Ohio State University in Columbus. “The whole field is full of enthusiasm and discovery.”
… new technology came online that gave genetic analysis a turbo boost. Sequencing a genome — the entirety of an organism’s DNA — became faster and cheaper than most scientists ever predicted. With next-generation sequencing, Woyke can analyze more than 100 billion bases in the time it takes to turn around an Amazon order, she says, and for just a few thousand dollars.
“The genetic code is not as rigid as we thought.”

Microbial matter comes out of the dark

Scientists identify bacteria that defy rules of biochemistry
Few people today could recite the scientific accomplishments of 19th century physician Julius Petri. But almost everybody has heard of his dish.

For more than a century, microbiologists have studied bacteria by isolating, growing and observing them in a petri dish. That palm-sized plate has revealed the microbial universe — but only a fraction, the easy stuff, the scientific equivalent of looking for keys under the lamppost.

But in the light — that is, the greenhouse-like conditions of a laboratory — most bacteria won’t grow. By one estimate, a staggering 99 percent of all microbial species on Earth have yet to be discovered, remaining in the shadows. They’re known as “microbial dark matter,” a reference to astronomers’ description of the vast invisible matter in space that makes up most of the mass in the cosmos.

This year in the ISME Journal, Ohio State’s Wrighton reported a study of the enzyme RubisCO taken from a new microbial species that had never been grown in a laboratory. RubisCO, considered the most abundant protein on Earth, is key to photosynthesis; it helps convert carbon from the atmosphere into a form useful to living things. Because the majority of life on the planet would not exist without it, RubisCO is a familiar molecule — so familiar that most scientists thought they had found all the forms it could take. Yet, Wrighton says, “we found so many new versions of this protein that were entirely different from anything we had seen before.”


This is a short article with some potentially good news about water.

How a Sponge, Bubble Wrap and Sunlight Can Lead to Clean Water

With simple materials, MIT researchers have developed a cheap, easy-to-build device to desalinate water and treat wastewater
Researchers at MIT were looking for a way to clean and desalinate water without using expensive specialty materials or devices. What they came up with is, in layman’s terms, a sponge encased in bubble wrap. This “solar vapor generator” can heat water up enough to make it boil, evaporating the water and leaving behind unwanted products like salt.

The most common way to concentrate sunlight and generate heat is with mirrors, says George Ni, a PhD candidate who led the research. But the problem is that mirrors and other optical heat concentrators are often pricey.

“If you’re going to use this for desalinating water in a developing country, it’s really too expensive for most people to afford,” he says.

The solar vapor generator that Ni and his team developed involves a metallic film that can absorb radiation and trap heat. This spectrally selective absorber is mounted on a piece of special sponge made of graphite and carbon foam, which can boil water to 100 degrees Celsius using ambient sunlight. The whole thing is then wrapped in bubble wrap. The bubble wrap allows the sunlight in, but keeps the heat from escaping when the wind blows across the device, making it much more efficient.
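
To get a feel for the scale - here is a rough, hedged estimate of how much water one square metre could evaporate under ordinary sunlight. The constants are textbook values; the efficiency is my own placeholder, not the MIT team’s measured number:

# How much water can one square metre evaporate under one sun?
SOLAR_FLUX_W_M2 = 1000.0     # ~1 kW/m2, standard "one sun" insolation
LATENT_HEAT_J_KG = 2.26e6    # latent heat of vaporization of water at 100 C
efficiency = 0.6             # assumed fraction of sunlight converted to vapour

kg_per_hour_per_m2 = SOLAR_FLUX_W_M2 * efficiency * 3600 / LATENT_HEAT_J_KG
print(round(kg_per_hour_per_m2, 2))   # ~0.96 kg of water vapour per hour per m2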


From simple mass-produced material to intricately printed objects - this article explains an advance made in 4D printing - using 3D printing with materials that remember.
“Our method not only enables 4-D printing at the micron-scale, but also suggests recipes to print shape-memory polymers that can be stretched 10 times larger than those printed by commercial 3-D printers,” Ge says. “This will advance 4-D printing into a wide variety of practical applications, including biomedical devices, deployable aerospace structures, and shape-changing photovoltaic solar cells.”

3-D printed structures “remember” their shapes

Heat-responsive materials may aid in controlled drug delivery and solar panel tracking.
Engineers from MIT and Singapore University of Technology and Design (SUTD) are using light to print three-dimensional structures that “remember” their original shapes. Even after being stretched, twisted, and bent at extreme angles, the structures — from small coils and multimaterial flowers, to an inch-tall replica of the Eiffel tower — sprang back to their original forms within seconds of being heated to a certain temperature “sweet spot.”

For some structures, the researchers were able to print micron-scale features as small as the diameter of a human hair — dimensions that are at least one-tenth as big as what others have been able to achieve with printable shape-memory materials. The team’s results were published earlier this month in the online journal Scientific Reports.

Nicholas X. Fang, associate professor of mechanical engineering at MIT, says shape-memory polymers that can predictably morph in response to temperature can be useful for a number of applications, from soft actuators that turn solar panels toward the sun, to tiny drug capsules that open upon early signs of infection.


Here’s a great premonition of the emerging world of autonomous drone sensors that will give us real-time data of the state of the planet.
Last summer, working with scientists and engineers from the National Oceanic and Atmospheric Administration, the boats skimmed along the edge of the retreating Arctic ice cap, giving scientists a detailed account of temperature, salinity and ecosystem information that would have been difficult and expensive to obtain in person.
Mr. Jenkins has a much grander vision. He believes the missing piece of the puzzle to definitively comprehend the consequences of global warming is scientific data. He envisions a fleet of thousands or even tens of thousands of his 23-foot sailboats creating a web of sensors across the world’s oceans.

No Sailors Needed: Robot Sailboats Scour the Oceans for Data

Two robotic sailboats trace lawn-mower-style paths across the violent surface of the Bering Sea, off the coast of Alaska. The boats are counting fish — haddock, to be specific — with a fancy version of the fish finder sonar you’d find on a bass fishing boat.

About 2,500 miles away, Richard Jenkins, a mechanical engineer and part-time daredevil, is tracking the robot sailboats on a large projection screen in an old hangar that used to be part of the Alameda Naval Air Station. Now the hangar is the command center of a little company called Saildrone.

At least 20 companies are chasing the possibly quixotic dream of a self-driving car in Silicon Valley. But self-sailing boats are already a real business.


This is an excellent brief article providing an updated analysis of the state of the MOOC - as it develops new business models.
Fundamentally, MOOCs as a format haven’t changed much over the last five years. What’s really changed is how they are packaged and promoted.

MOOCs no longer massive, still attract millions

The first ever MOOC I took had 160,000 people signed up for it.
The forums were buzzing with activity. New posts were being added every few minutes. If I had any question at all, it had already been asked and answered by someone else.

But recently I have noticed forum activity and interactions in MOOCs have declined drastically.

This is despite the MOOC user base doubling in 2015. The total number of students who signed up for at least one course had crossed 35 million — up from an estimated 16–18 million in 2014 — according to data collected by Class Central, where I work.


This is a significant milestone in the development of an application of AI and machine learning with neural networks. Listening to the brief samples is worth the time - and provides all you need to see that soon our systems will talk to us with very human-sounding voices - and even make music. The world of computer-generated experiences may no longer need human voices or musicians.

Google’s WaveNet uses neural nets to generate eerily convincing speech and music

Generating speech from a piece of text is a common and important task undertaken by computers, but it’s pretty rare that the result could be mistaken for ordinary speech. A new technique from researchers at Alphabet’s DeepMind takes a completely different approach, producing speech and even music that sounds eerily like the real thing.

Early systems used a large library of the parts of speech (phonemes and morphemes) and a large ruleset that described all the ways letters combined to produce those sounds. The pieces were joined, or concatenated, creating functional speech synthesis that can handle most words, albeit with unconvincing cadence and tone. Later systems parameterized the generation of sound, making a library of speech fragments unnecessary. More compact — but often less effective.

WaveNet, as the system is called, takes things deeper. It simulates the sound of speech at as low a level as possible: one sample at a time. That means building the waveform from scratch — 16,000 samples per second.
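
The core idea - predict each of the 16,000 samples per second from the samples that came before, and feed every prediction back in - can be sketched in a few lines. To be clear, this toy loop is not WaveNet (no neural network, no dilated convolutions); it only illustrates the sample-by-sample, autoregressive generation pattern:

import math, random

SAMPLE_RATE = 16000          # samples per second, as in the article

def next_sample(history):
    # Stand-in for WaveNet's neural network: here the "model" just continues a
    # 440 Hz sine with a little noise. The real system predicts a probability
    # distribution over the next sample, conditioned on the raw waveform so far.
    t = len(history)
    return 0.5 * math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) + random.gauss(0, 0.01)

def generate(seconds=0.01):
    waveform = []
    for _ in range(int(seconds * SAMPLE_RATE)):
        waveform.append(next_sample(waveform))   # one sample at a time, fed back in
    return waveform

audio = generate()
print(len(audio))   # 160 samples for 10 ms of "audio" at 16 kHz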


AI may be progressing at super-exponential speed - due both to the ‘Moore’s Law’ of computation and, even more, to the growth and power of cloud computing and the near-zero marginal cost of spreading software advances arising from machine learning. This is an excellent and brief article with a link to the 53-page report and a 15-minute podcast discussing the report. Well worth the visit for anyone who’s interested in a realistic assessment of the future of AI.

What artificial intelligence will look like in 2030

New report examines how AI might affect urban life
Artificial intelligence (AI) has already transformed our lives — from the autonomous cars on the roads to the robotic vacuums and smart thermostats in our homes. Over the next 15 years, AI technologies will continue to make inroads in nearly every area of our lives, from education to entertainment, health care to security.

The question is, are we ready? Do we have the answers to the legal and ethical quandaries that will certainly arise from the increasing integration of AI into our daily lives? Are we even asking the right questions?

Now, a panel of academics and industry thinkers has looked ahead to 2030 to forecast how advances in AI might affect life in a typical North American city and spark discussion about how to ensure the safe, fair, and beneficial development of these rapidly developing technologies.

“Artificial Intelligence and Life in 2030” is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford University to inform debate and provide guidance on the ethical development of smart software, sensors, and machines. Every five years for the next 100 years, the AI100 project will release a report that evaluates the status of AI technologies and their potential impact on the world.


This is a great article - a wonderful, accessible review of our progress in trying to understand the brain and consciousness - and it’s written by an excellent Canadian science fiction writer, so he’s able to add some very interesting questions and speculations. Well worth the read - because the science and technology involved are progressing at exponential speed, the questions posed are questions we should be pondering, for they signal a deep change in the conditions of change.
Consciousness remains mysterious. But there’s no reason to regard it as magical, no evidence of spectral bonds that hold a soul in one head and keep it from leaking into another. And one of the things we do know is that consciousness spreads to fill the space available. Smaller selves disappear into larger; two hemispheres integrate into one. The architectural specifics aren’t even all that important if Tononi is right, if the Cambridge Declaration is anything to go on. You don’t need a neocortex or a hypothalamus. All you need is complexity and a sufficiently fat pipe.

Hive consciousness

New research puts us on the cusp of brain-to-brain communication. Could the next step spell the end of individual minds?
You already know that we can run machines with our brainwaves. That’s been old news for almost a decade, ever since the first monkey fed himself using a robot arm and the power of positive thinking. Nowadays, even reports of human neuroprostheses barely raise an eyebrow. Brain-computer interfaces have become commonplace in everything from prosthetic vision to video games (a lot of video games; Emotiv and NeuroSky are perhaps the best-known purveyors of Mind Control to the gaming crowd) to novelty cat ears that perk up on your head when you get horny.

But we’ve moved beyond merely thinking orders at machinery. Now we’re using that machinery to wire living brains together. Last year, a team of European neuroscientists headed by Carles Grau of the University of Barcelona reported a kind of – let’s call it mail-order telepathy – in which the recorded brainwaves of someone thinking a salutation in India were emailed, decoded and implanted into the brains of recipients in Spain and France (where they were perceived as flashes of light).


Thinking about consciousness, sensoriums and technology ultimately sparks questions about where the borders of a being really are - that mind extends beyond the container of the brain. It seems that even spiders use the technology of their webs to extend their senses in very rich ways.
Web-dwelling spiders have poor vision and rely almost exclusively on web vibrations for their 'view' of the world. The musical patterns coming from their tuned webs provide them with crucial information on the type of prey caught in the web and of predators approaching, as well as the quality of prospective mates. Spiders carefully engineer their webs out of a range of silks to control web architecture, tension and stiffness, analogous to constructing and tuning a musical instrument.

Tuning the instrument: Spider webs as vibration transmission structures

Two years ago, scientists revealed that, when plucked like a guitar string, spider silk transmits vibrations across a wide range of frequencies, carrying information about prey, mates and even the structural integrity of a web. Now, a new collaboration has confirmed that spider webs are superbly tuned instruments for vibration transmission -- and that the type of information being sent can be controlled by adjusting factors such as web tension and stiffness.
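
The “tuning” analogy is quite literal. For an idealized silk strand, the fundamental frequency follows the familiar stretched-string formula, so changing tension or thread mass shifts the notes the web can carry. The numbers below are rough, assumed values for illustration, not measurements from the study:

import math

def fundamental_hz(tension_N, length_m, mass_per_length_kg_per_m):
    # f1 = (1 / 2L) * sqrt(T / mu) for an ideal stretched string
    return math.sqrt(tension_N / mass_per_length_kg_per_m) / (2 * length_m)

# Rough, assumed values for a ~3-micron dragline thread, for illustration only:
length = 0.10      # a 10 cm strand of the web
mu = 9.2e-9        # kg per metre (silk density ~1300 kg/m3, ~3 micron diameter)
tension = 7e-4     # newtons, well below the thread's breaking load

f_slack = fundamental_hz(tension, length, mu)
f_tight = fundamental_hz(2 * tension, length, mu)   # "tuning": double the tension
print(round(f_slack), round(f_tight))               # ~1379 Hz vs ~1950 Hz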


The world of sensors and sensing is also accelerating - here’s a very interesting development - and who knows what new affordances it will enable? There’s a 5 min video as well.

Judging a book through its cover

New computational imaging method identifies letters printed on first nine pages of a stack of paper.
MIT researchers and their colleagues are designing an imaging system that can read closed books.

In the latest issue of Nature Communications, the researchers describe a prototype of the system, which they tested on a stack of papers, each with one letter printed on it. The system was able to correctly identify the letters on the top nine sheets.

“The Metropolitan Museum in New York showed a lot of interest in this, because they want to, for example, look into some antique books that they don’t even want to touch,” says Barmak Heshmat, a research scientist at the MIT Media Lab and corresponding author on the new paper. He adds that the system could be used to analyze any materials organized in thin layers, such as coatings on machine parts or pharmaceuticals.


When art imitates life and then life imitates art - AI and deep learning.
“Video game graphics have actually gotten good enough that you can train on raw data and have it be almost as good as real-world data,” Schmidt continued. Of course, video games aren’t advanced enough to be indistinguishable from reality, and so real images are still preferred. But you can cull so many labelled images from games that their sheer number makes up for the lack of detail in individual images.

Video Games Are So Realistic That They Can Teach AI What the World Looks Like

Thanks to the modern gaming industry, we can now spend our evenings wandering around photorealistic game worlds, like the post-apocalyptic Boston of Fallout 4 or Grand Theft Auto V’s Los Santos, instead of doing things like “seeing people” and “engaging in human interaction of any kind.”

Games these days are so realistic, in fact, that artificial intelligence researchers are using them to teach computers how to recognize objects in real life. Not only that, but commercial video games could kick artificial intelligence research into high gear by dramatically lessening the time and money required to train AI.

Schmidt works with machine learning, a technique that allows computers to “train” on a large set of labelled data—photographs of streets, for example—so that when let loose in the real world, they can recognize, or “predict,” what they’re looking at. Schmidt and Alireza Shafaei, a PhD student at UBC, recently studied Grand Theft Auto V and found that self-learning software trained on images from the game performed just as well, and in some cases even better, than software trained on real photos from publicly available datasets.
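
The workflow being described - train on cheaply labelled synthetic frames, evaluate on real photos - looks roughly like the sketch below. The arrays are random placeholders standing in for image features (real work would use actual GTA V captures and a photo dataset), so the number it prints is meaningless; only the train-on-synthetic / test-on-real structure is the point:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder "features" from game frames (cheap, automatically labelled) and
# from real photos (expensive, hand labelled). Random vectors stand in for both.
X_game, y_game = rng.normal(size=(5000, 64)), rng.integers(0, 2, 5000)
X_real, y_real = rng.normal(size=(500, 64)), rng.integers(0, 2, 500)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_game, y_game)                     # train only on synthetic data

print("accuracy on real photos:", clf.score(X_real, y_real))
# With random placeholders this hovers around 0.5; the study's point is that with
# real game imagery the gap to photo-trained models nearly disappears.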


There could be two very important demographic groups that will surge toward the use of the self-driving car and, more importantly, toward new models of mass transit based on this technology - the Boomers and the Millennials.
Today however, older teens and young adults don't need cars to achieve a sense of self and freedom. This generation’s coming of age consisted of graduating from the Internet and CD-ROM computer games to hand-held mobile devices where they’re establishing identities, relationships, and individualism online all day long—as much as, if not more than, in the real world.

Millennials Don't Care About Owning Cars, And Car Makers Can't Figure Out Why

Driving numbers are down for younger people and the auto industry hasn't found a way to respond. It's because they don't understand why millennials could possibly not want to drive.
Auto manufacturers today are scratching their heads, trying to figure out why the millennial generation has little-to-no interest in owning a car. What car makers are failing to see is that this generation’s interests and priorities have been redefined in the last two decades, pushing cars to the side while must-have personal technology products take up the fast lane.

It’s no secret the percentage of new vehicles sold to 18- to 34-year-olds has significantly dropped over the past few years. Many argue this is the result of a weak economy, that the idea of making a large car investment and getting into more debt on top of college loans is too daunting for them. But that’s not the "driving" factor, especially considering that owning a smartphone or other mobile device, with its monthly fees of network access, data plan, insurance, and app services, is almost comparable to the monthly payments required when leasing a Honda Civic.

What auto manufacturers, along with much of corporate America are missing here is that the vehicles to freedom and personal identity have changed for this generation. The sooner brands get a grip on this reality the sooner they can make adjustments in how they market to and communicate with this core group, which is essential to their long-term success.


I’m not sure if this should be ‘FOR FUN’ or ‘CREEPY’ or both. Depending on the instructor - it may be easier to find faces that are not bored. In another way, this is yet another signal that the ‘survey’ is dead and the future will be real-time data via sociometric badges and other technologies.

Chinese lecturer to use facial-recognition technology to check boredom levels among his students

A Chinese university lecturer is using facial-recognition technology on his students to help determine the level of interest in his classes, a tool he said could be used in wider education.

Science professor Wei Xiaoyong developed the new “face reader” to identify emotions which suggest if students are bored or stimulated.

His technique produces a “curve” for each student showing how much they are either “happy” or “neutral”, and that data can indicate whether they are bored, he said.
“When we correlate that kind of information to the way we teach, and we use a timeline, then you will know where you are actually attracting the students’ attention,” Professor Wei told The Telegraph.

“Then you can ask whether this is a good way to teach that content? Or if this content is OK for the students in that class?”
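
The “curve” Professor Wei describes is easy to picture with a little bookkeeping: per-frame emotion labels binned along the lecture timeline. The sketch below is not his software - the labels are invented, and a real system would get them from a facial-expression model - but it shows the shape of the data:

import random

EMOTIONS = ["happy", "neutral"]   # the two states the article mentions

def fake_frame_labels(n_frames=2700):
    # Stand-in for the face reader's per-second output over a 45-minute lecture.
    return [random.choice(EMOTIONS) for _ in range(n_frames)]

def engagement_curve(labels, bin_seconds=300):
    # Fraction of "happy" frames in each five-minute bin of the timeline.
    curve = []
    for start in range(0, len(labels), bin_seconds):
        window = labels[start:start + bin_seconds]
        curve.append(sum(1 for x in window if x == "happy") / len(window))
    return curve

print([round(v, 2) for v in engagement_curve(fake_frame_labels())])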

Thursday, September 8, 2016

Friday Thinking 9 Sept 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
Content
Quotes:

Articles
The Internet is systematically changing who we date


….the whole world is facing a data crunch. Counting everything from astronomical images and journal articles to YouTube videos, the global digital archive will hit an estimated 44 trillion gigabytes (GB) by 2020, a tenfold increase over 2013. By 2040, if everything were stored for instant access in, say, the flash memory chips used in memory sticks, the archive would consume 10–100 times the expected supply of microchip-grade silicon.

How DNA could store all the world’s data



... in recent years, as parents turn to streaming services to keep their kids entertained, the average child is now spared from over 150 hours of commercials a year.

The math is pretty straightforward, as the average kid spends 1.8 hours a day using a streaming service like Netflix, Amazon Prime, or Hulu, all of which don't have commercials on children's programming. Over the course of the year, that's about 650 hours of streaming consumption. With the average hour of television usually housing over 14 minutes of commercials, it's quick to see how many ads our children are spared from on a yearly basis.

Netflix saves kids from over 150 hours of commercials a year
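
The arithmetic is simple enough to reproduce directly:

# Reproducing the article's arithmetic.
hours_streamed_per_day = 1.8
ad_minutes_per_tv_hour = 14

streaming_hours_per_year = hours_streamed_per_day * 365
ads_avoided_hours = streaming_hours_per_year * ad_minutes_per_tv_hour / 60
print(round(streaming_hours_per_year), round(ads_avoided_hours))   # ~657 and ~153 hours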





I’m really excited - probably pre-emptively - but this looks like the incumbents will finally get the competition they’ve been deserving for... well, ever! If optical fiber were provided to every home and building as part of public infrastructure - this would not only disrupt current incumbent rent-seekers but provide the platform for the 21st Century economy. However, Google’s plan only works with their two new phones. :(
Despite this, it represents a significant change - enabling people to get an international phone - and that’s an exciting development.
“To stay connected in places where your cell connection isn’t as strong, you can have your phone or tablet automatically connect to open Wifi networks that we’ve verified as fast and reliable,” says Google. “Wifi Assistant is a free service that makes these connections for you securely.”

Google to offer free Wifi mobile calling in North America and Europe

Google is taking the next step in competing with mobile operators by extending its free Wifi alternative for voice and data “in the next few weeks”.
The service, called Wifi Assistant, will be available to all owners of Google Nexus phones in the US, Canada, Mexico, the UK and Nordic countries, said Google in a statement.

“This will roll out to users over the next few weeks,” said the company. It promises “more than a million free, open Wifi hotspots” that will connect automatically to allow Nexus owners to move their data off mobile networks.

This feature was originally a feature of Project Fi, Google’s US-based mobile service that uses capacity on Sprint, T-Mobile US and US Cellular networks, but also Wifi where available.


This is a must view video about the present, the future and foresight methods.

Jerome C. Glenn on Singularity 1 on 1: Science is an epistemology in the house of philosophy

Jerome C. Glenn is co-founder and Director of The Millennium Project and I had great fun talking to him during our first interview. But it has been over two years since our previous conversation and so, when Jason Ganz reminded me that the latest State of the Future report has been out for several months now, I jumped at the opportunity to have Jerome back on Singularity 1 on 1.

In this second discussion with Glenn we cover a wide variety of topics such as: The State of the Future report; if the world is coming to an end; the definition of war and the conflicts in Palestine, Syria, Iraq and Ukraine; things that changed and things that did not change since the last interview; infectious disease epidemics and the containment thereof; bitcoin and cryptocurrencies; the 15 global challenges and why ethics is one of them; sea/salt water agriculture; the growing rich-poor gap and technological unemployment…


This is a short update on MOOCs - worth the read.
By now it is crystal clear that MOOCs cannot be compared to traditional courses. Yes, they may replace and/or supplement existing courses, but they are fundamentally different.

MOOCs and Beyond

By now we know that MOOCs are not the final answer. Higher education will not be saved (or destroyed) by these massive open online courses that splashed into everyone’s consciousness about three years ago. Yes, they provide some fascinating opportunities for expanding access to higher education, for helping us to rethink how teaching and learning works, and for revitalizing the debate about the role of faculty and the power (or futility) of going to college. But most pundits and educators have moved on to the next shiny new fad.
This is a mistake.

For underneath and behind the scenes, much progress continues to be made.* In fact, I would suggest that it is only now – after three frustrating years where expectations were raised way too high and subsequently plummeted way too low – are we starting to see the real opportunities.


Here is a great TED Talk by Don Tapscott about the Blockchain. For anyone who still doesn’t understand the Disruptive power of this distributed ledger technology - this is a must view.

Don Tapscott: How the blockchain is changing money and business

What is the blockchain? If you don't know, you should; if you do, chances are you still need some clarification on how it actually works. Don Tapscott is here to help, demystifying this world-changing, trust-building technology which, he says, represents nothing less than the second generation of the internet and holds the potential to transform money, business, government and society.


The rise of collective intelligence is about a lot more than the emergence of connected humans - it is also about connecting computational and algorithmic prosthetics.

Rise of the Strategy Machines

While humans may be ahead of computers in the ability to create strategy today, we shouldn’t be complacent about our dominance.
As a society, we are becoming increasingly comfortable with the idea that machines can make decisions and take actions on their own. We already have semi-autonomous vehicles, high-performing manufacturing robots, and automated decision making in insurance underwriting and bank credit. We have machines that can beat humans at virtually any game that can be programmed. Intelligent systems can recommend cancer cures and diabetes treatments. “Robotic process automation” can perform a wide variety of digital tasks.

What we don’t have yet, however, are machines for producing strategy. We still believe that humans are uniquely capable of making “big swing” strategic decisions. For example, we wouldn’t ask a computer to put together a new “mobility strategy” for a car company based on such trends as a decreased interest in driving among teens, the rise of ride-on-demand services like Uber and Lyft, and the likelihood of self-driving cars at some point in the future. We assume that the defined capabilities of algorithms are no match for the uncertainties, high-level issues, and problems that strategy often serves up.

We may be ahead of smart machines in our ability to strategize right now, but we shouldn’t be complacent about our human dominance. First, it’s not as if we humans are really that great at it. We know, for example, that the success rate of M&A deals is no better than a coin toss, and one study suggests that 83% of such deals fail to achieve their original goals. New products routinely bomb in the marketplace, companies expand unsuccessfully into new regions and countries, and myriad other strategic decisions don’t pan out.


Here’s a MUST VIEW 4min peek at the future of learning - of how a self-driving car can become a personal learning environment - even of the future of certain types of tourism.

Field Trip to Mars: Framestore's shared VR experience delivered with Unreal Engine 4

BAFTA, Oscar and Cannes Lions-winning VFX house Framestore recently took a group of school children on a Field Trip to Mars, courtesy of a one-of-a-kind US school bus, Unreal Engine and a brilliantly conceived shared VR experience.


And talking about the future - everyone should recognize - “Tea, Earl Grey, Hot” - Captain Picard ordering his beverage from the replicator. Here’s something that may not be too far away.

When Molecular Nanofactory is realized then a desktop Whiskey Machine will produce spirits at less than 36 cents per bottle

While a lot has been written on the application of molecular nanotechnology to medicine (http://nanomedicine.com/ ), computing ( http://www.imm.org/Reports/rep046.pdf), the environment ( http://www.imm.org/Reports/rep045.pdf), and so forth, very little has been written on the manufacturing of atomically precise food.

But which kind of food? When analyzing or developing a new technology, you start with the simplest case. Unadorned beverages will be technically easier to manufacture than solid foods (e.g., steaks) because they require no specific three-dimensional structure and are essentially just solutions of chemicals dissolved in water. The trick is to know what and how much of each chemical, and to be able to manufacture them quickly, accurately, and cheaply enough to represent a significant advance over current methods. Nanofactories will enable this.

Alcohol is always a fun topic of general public interest, and whiskey is perhaps the most challenging of the fine spirits to analyze and synthesize, so this seemed like a good representative exemplar on which to focus a preliminary study. Robert Freitas has completed this preliminary study. The proposed Whiskey Machine would make a low-cost beverage that tastes as good as it is physically possible for that type of beverage to taste, down to the last atom!

This paper is the first serious scaling study of a nanofactory designed for the manufacture of a specific food product, in this case high-value-per-liter alcoholic beverages. The analysis indicates that a 6-kg desktop appliance called the Fine Spirits Synthesizer, a.k.a. the “Whiskey Machine,” consuming 300 Watts of power for all atomically precise mechanosynthesis operations, along with a commercially available 59-kg 900 Watt cryogenic refrigerator, could produce one 750 ml bottle per hour of any fine spirit beverage for which the molecular recipe is precisely known at a manufacturing cost of about $0.36 per bottle, assuming no reduction in the current $0.07/kWh cost for industrial electricity. The appliance’s carbon footprint is a minuscule 0.3 gm CO2 emitted per bottle, more than 1000 times smaller than the 460 gm CO2 per bottle carbon footprint of conventional distillery operations today. The same desktop appliance can intake a tiny physical sample of any fine spirit beverage and produce a complete molecular recipe for that product in ~17 minutes of run time, consuming less than 25 Watts of power, at negligible additional cost.
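
Out of curiosity, here is just the electricity slice of that cost estimate, recomputed from the quoted figures - the paper’s $0.36 per bottle also covers feedstock and amortized capital, which I haven’t modelled:

# Electricity share of the cost per bottle, from the figures quoted above.
synth_power_W = 300        # Fine Spirits Synthesizer during mechanosynthesis
fridge_power_W = 900       # commercial cryogenic refrigerator
hours_per_bottle = 1.0     # one 750 ml bottle per hour
price_per_kWh = 0.07       # quoted industrial electricity price

kwh_per_bottle = (synth_power_W + fridge_power_W) / 1000 * hours_per_bottle
print(round(kwh_per_bottle * price_per_kWh, 3))   # ~$0.084 of electricity per bottle;
                                                  # the paper's $0.36 covers everything else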


The rapidly emerging Internet of Things (especially sensors) may seem like a lot of hype and little motion. Sometimes things move slowly and then move very fast. The possibility of developing whole new senses is also coinciding with understanding the senses we already have. There’s more common sense than we imagine.
In 2003, Hatt and his colleagues showed that olfactory receptors in human sperm were functional and could be activated by an odor molecule, just like the receptors in the nose.
Over the next decade, Hatt’s team and others continued to identify olfactory receptors in a variety of human tissues, including the lungs, liver, skin, heart, and intestines. In fact, they are some of the most highly expressed genes in many tissues. “One can be sure that these receptors must have enormous importance for the cell,” Hatt says.

What Sensory Receptors Do Outside of Sense Organs

Odor, taste, and light receptors are present in many different parts of the body, and they have surprisingly diverse functions.
The light, odor, and taste receptors located in our eyes, noses, and tongues flood our brain with information about the world around us. But these same sensory receptors are also present in unexpected places around the body, where they serve a surprising range of biological roles. In the last decade or so, researchers have found that the gut “tastes” parasites before initiating immune responses, and the kidneys “smell” fatty acids, regulating blood pressure in response.

In contrast to the early days of the field, the idea of sensory receptors outside of sensory organs is no longer unusual. “They’re all just chemoreceptors, and you can use them in lots of different contexts in physiologically different systems,” says University of Colorado Denver neurobiologist Thomas Finger.

Now researchers are characterizing such sense receptors present in different tissues around the body and working to understand their functions, with the eventual goal of using these receptors for various diagnostic or therapeutic applications. Preliminary trials are underway to test therapeutic uses of light-induced vasodilation in humans, for example, and clinical trials will soon test whether a patient’s taste receptors—both those in the mouth and those in the airway—could be used to diagnose and to treat respiratory infections, respectively. While many of the details of the receptors’ activation and downstream signaling remain unclear, researchers are finally getting closer to making sense of what these receptors do outside of the classic sense organs. And labs are using modern genetic tools, such as arrays to detect gene expression or protein levels in different tissues, to pin them down.


This is very fascinating - as we domesticate matter, we learn to also ‘hack’ matter and, in this way, hack into computers. For anyone interested in computer security, this is a must read.
…. new attacks use a technique Google researchers first demonstrated last March called “Rowhammer.” The trick works by running a program on the target computer, which repeatedly overwrites a certain row of transistors in its DRAM memory, “hammering” it until a rare glitch occurs: Electric charge leaks from the hammered row of transistors into an adjacent row. The leaked charge then causes a certain bit in that adjacent row of the computer’s memory to flip from one to zero or vice versa. That bit flip gives you access to a privileged level of the computer’s operating system.
It’s messy. And mind-bending. And it works.

Forget Software—Now Hackers Are Exploiting Physics

PRACTICALLY EVERY WORD we use to describe a computer is a metaphor. “File,” “window,” even “memory” all stand in for collections of ones and zeros that are themselves representations of an impossibly complex maze of wires, transistors and the electrons moving through them. But when hackers go beyond those abstractions of computer systems and attack their actual underlying physics, the metaphors break.

Over the last year and a half, security researchers have been doing exactly that: honing hacking techniques that break through the metaphor to the actual machine, exploiting the unexpected behavior not of operating systems or applications, but of computing hardware itself—in some cases targeting the actual electricity that comprises bits of data in computer memory. And at the Usenix security conference earlier this month, two teams of researchers presented attacks they developed that bring that new kind of hack closer to becoming a practical threat.


Learning is still a mystery - could it be part of the next Turing Test? Deep learning algorithms remain a sort of black box - but whether we understand it or not, learning seems to happen.
“Our study uses the Turing test to reveal how a given system – not necessarily a human – works. In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements. To do so, we put a second swarm – made of learning robots – under surveillance too. The movements of all the robots were recorded, and the motion data shown to interrogators.”

Researchers discover machines can learn by simply observing, without being told what to look for

Researchers working with swarm robots make major breakthrough
It is now possible for machines to learn how natural or artificial systems work by observing them, without being told what to look for
This could lead to advances in how machines infer knowledge and use it for predicting behaviour and detecting abnormalities
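
The setup in the quote is essentially two sides improving against each other: candidate models of the swarm, and an “interrogator” classifier trying to tell model-generated behaviour from the real thing. Here is a deliberately stripped-down, one-parameter version of that idea - my own illustration with invented numbers, not the researchers’ system:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
TRUE_SPEED = 0.7   # the hidden rule governing the "real" swarm (unknown to the learner)

def observe(speed, n=200):
    # Motion data sampled from a system; the learner only ever sees samples like these.
    return rng.normal(speed, 0.05, size=(n, 1))

def indistinguishability(candidate_speed):
    # Train an "interrogator" to tell real observations from the candidate model's;
    # the closer its accuracy is to chance, the better the candidate model.
    real, fake = observe(TRUE_SPEED), observe(candidate_speed)
    X = np.vstack([real, fake])
    y = np.array([1] * len(real) + [0] * len(fake))
    clf = LogisticRegression().fit(X, y)
    return 1.0 - clf.score(X, y)      # ~0.5 means the interrogator is fooled

best = rng.uniform(0.0, 2.0)          # an initial random guess at the hidden rule
for _ in range(60):
    challenger = best + rng.normal(0.0, 0.1)
    if indistinguishability(challenger) > indistinguishability(best):
        best = challenger

print(round(best, 2))                 # tends to drift toward the hidden value, 0.7

The candidate that best fools the interrogator ends up closest to the hidden rule - which is the sense in which the machine “learns by observing” without being told what to look for.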


Here’s a 27 page Stanford University report on the future of Artificial Intelligence.

ARTIFICIAL INTELLIGENCE AND LIFE IN 2030 ONE HUNDRED YEAR STUDY ON ARTIFICIAL INTELLIGENCE

The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. It considers the science, engineering, and deployment of AI-enabled computing systems. As its core activity, the Standing Committee that oversees the One Hundred Year Study forms a Study Panel every five years to assess the current state of AI. The Study Panel reviews AI’s progress in the years following the immediately prior report, envisions the potential advances that lie ahead, and describes the technical and societal challenges and opportunities these advances raise, including in such arenas as ethics, economics, and the design of systems compatible with human cognition. The overarching purpose of the One Hundred Year Study’s periodic expert review is to provide a collected and connected set of reflections about AI and its influences as the field advances. The studies are expected to develop syntheses and assessments that provide expert-informed guidance for directions in AI research, development, and systems design, as well as programs and policies to help ensure that these systems broadly benefit individuals and society.


And if learning machines aren’t enough - computational paradigms continue to develop. Advances in quantum capabilities go beyond computer-type concepts and could contribute to renewable energy and artificial photosynthesis.
“They are definitely the world leaders now, there is no doubt about it,” says Simon Devitt at the RIKEN Center for Emergent Matter Science in Japan. “It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong.”

Revealed: Google’s plan for quantum computer supremacy

The field of quantum computing is undergoing a rapid shake-up, and engineers at Google have quietly set out a plan to dominate
SOMEWHERE in California, Google is building a device that will usher in a new era for computing. It’s a quantum computer, the largest ever made, designed to prove once and for all that machines exploiting exotic physics can outperform the world’s top supercomputers.

And New Scientist has learned it could be ready sooner than anyone expected – perhaps even by the end of next year.

The quantum computing revolution has been a long time coming. In the 1980s, theorists realised that a computer based on quantum mechanics had the potential to vastly outperform ordinary, or classical, computers at certain tasks. But building one was another matter. Only recently has a quantum computer that can beat a classical one gone from a lab curiosity to something that could actually happen. Google wants to create the first.

The firm’s plans are secretive, and Google declined to comment for this article. But researchers contacted by New Scientist all believe it is on the cusp of a breakthrough, following presentations at conferences and private meetings.


Even China is recognizing that the race to develop AI capability can’t be won via proprietary approaches alone - Baidu now joins the ranks of Google, Microsoft, Facebook and others in making their AI software open-source.
“You don’t need to be an expert to quickly apply this to your project,” Xu said in an interview. “You don’t worry about writing math formulas or how to handle data tasks.” (Indeed, the playful doubling of the original code-name is intended to convey that it’s easier to use than rival software.)
In the case of open-sourcing of AI algorithms, and specifically at Baidu, the aim is to draw more deep learning engineers, which are in very high demand today. “ People will recognize Baidu as a leader, so it will attract more talent,” said Xu.

China’s Baidu to open-source its deep learning AI platform

The Chinese Internet giant Baidu Inc. has been making big progress in applying deep learning neural networks to improve image recognition, language translation, search ranking and click prediction in advertising. Now, it’s going to give a lot of it away.

The company, often called “China’s Google,” will announce Thursday at the annual Baidu World conference in Beijing that it’s offering the artificial intelligence software that its own engineers have been using for years as open source. Available in an early version on GitHub with full availability Sept. 30, it’s code-named PaddlePaddle, for PArallel Distributed Deep LEarning.

Deep learning is the branch of machine learning that attempts to emulate the way neurons work in the human brain to find patterns in data representing sounds, images, and other data. Google, Facebook, Microsoft, IBM and other companies have also been making big breakthroughs thanks to the ability to pump massive amounts of data into these artificial neural networks.

The announcement follows the open-sourcing in the last two years of other machine intelligence and deep learning tools such as Torch and machine-vision technology from Facebook, TensorFlow from Google, the Computational Network Toolkit (CNTK) from Microsoft and DSSTNE from Amazon.com, as well as independent open source frameworks such as Caffe. Baidu also has open-sourced other pieces of its AI code. But Xu Wei, the Baidu distinguished scientist who led PaddlePaddle’s development, said this software is intended for broader use even by programmers who aren’t experts in deep learning, which involves painstaking training of software models.


And related to emerging new and powerful computing paradigms - beyond the Moore’s Law is Dead - Long Live Moore’s Law file - here’s something about the development of biocomputing.

How DNA could store all the world’s data

Modern archiving technology cannot keep up with the growing tsunami of bits. But nature may hold an answer to that problem already.
It was Wednesday 16 February 2011, and Goldman was at a hotel in Hamburg, Germany, talking with some of his fellow bioinformaticists about how they could afford to store the reams of genome sequences and other data the world was throwing at them. He remembers the scientists getting so frustrated by the expense and limitations of conventional computing technology that they started kidding about sci-fi alternatives. “We thought, 'What's to stop us using DNA to store information?'”

Then the laughter stopped. “It was a lightbulb moment,” says Goldman, a group leader at the European Bioinformatics Institute (EBI) in Hinxton, UK. True, DNA storage would be pathetically slow compared with the microsecond timescales for reading or writing bits in a silicon memory chip. It would take hours to encode data by synthesizing DNA strings with a specific pattern of bases, and still more hours to recover that information using a sequencing machine. But with DNA, a whole human genome fits into a cell that is invisible to the naked eye. For sheer density of information storage, DNA could be orders of magnitude beyond silicon — perfect for long-term archiving.
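To make the encoding idea concrete, here is a deliberately naive Python sketch - not the EBI scheme itself, which uses a more elaborate code with error correction and avoids long runs of the same base - that simply maps two bits of data to each of the four DNA bases:

# Naive illustration: two bits per base, so one byte of data needs four bases.
# Real DNA-storage codes (such as the EBI work described above) are more
# sophisticated, adding redundancy and avoiding error-prone repeated bases.
BASE_FOR_BITS = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    bits = ''.join(f'{byte:08b}' for byte in data)
    return ''.join(BASE_FOR_BITS[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = ''.join(BITS_FOR_BASE[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b'DNA')
print(strand)                    # 12 bases encode 3 bytes
assert decode(strand) == b'DNA'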


Here’s another milestone in the “Moore’s Law is Dead - Long Live Moore’s Law” file. Although this is not yet ready for commercial-scale production - that’s next in line.
"This achievement has been a dream of nanotechnology for the last 20 years," says Arnold. "Making carbon nanotube transistors that are better than silicon transistors is a big milestone. This breakthrough in carbon nanotube transistor performance is a critical advance toward exploiting carbon nanotubes in logic, high-speed communications, and other semiconductor electronics technologies."

For first time, carbon nanotube transistors outperform silicon

For decades, scientists have tried to harness the unique properties of carbon nanotubes to create high-performance electronics that are faster or consume less power—resulting in longer battery life, faster wireless communication and faster processing speeds for devices like smartphones and laptops.

But a number of challenges have impeded the development of high-performance transistors made of carbon nanotubes, tiny cylinders made of carbon just one atom thick. Consequently, their performance has lagged far behind semiconductors such as silicon and gallium arsenide used in computer chips and personal electronics.

Now, for the first time, University of Wisconsin-Madison materials engineers have created carbon nanotube transistors that outperform state-of-the-art silicon transistors.

Led by Michael Arnold and Padma Gopalan, UW-Madison professors of materials science and engineering, the team's carbon nanotube transistors achieved current that's 1.9 times higher than silicon transistors. The researchers reported their advance in a paper published Friday (Sept. 2) in the journal Science Advances.


This is a wonderful six-minute video highlighting that the domestication of bacteria has a very long history indeed - in fact, plants became masters of it long before humans domesticated plants.

Bacteria-Taming Plants

To grow, plants need nutrients that they draw from soil, especially nitrogen. Pulse crops (the UN declared 2016 the International Year of Pulses (IYP)) have developed a cunning strategy: they partner up with bacteria able to metabolize nitrogen directly from the atmosphere. Researchers are very interested in this phenomenon, and hope to apply it to develop therapeutic applications or use atmospheric nitrogen in agriculture.


The end game of fossil fuels is becoming recognized by major investors.
"We're calling on governments to kick away these carbon crutches, reveal the true impact to society of fossil fuels and take into account the price we will pay in the future for relying on them,"

'The Mother of All Risks': Insurance Giants Call on G20 to Stop Bankrolling Fossil Fuels

Multinational firms managing $1.2tn in assets declare subsidies for coal, oil, and gas 'simply unsustainable'
Warning that climate change amounts to the "mother of all risks," three of the world's biggest insurance companies this week are demanding that G20 countries stop bankrolling the fossil fuels industry.

Multi-national insurance giants Aviva, Aegon, and Amlin, which together manage $1.2tn in assets, released a statement Tuesday calling on the leaders of the world's biggest economies to commit to ending coal, oil, and gas subsidies within four years.

"Climate change in particular represents the mother of all risks—to business and to society as a whole. And that risk is magnified by the way in which fossil fuel subsidies distort the energy market," said Aviva CEO Mark Wilson. "These subsidies are simply unsustainable."

According to a recent report by the International Monetary Fund (IMF), fossil fuel companies receive an estimated $5.3tn a year in global subsidies—a figure that included, as the IMF put it, the "real costs" associated with damage to the environment and human health that are foisted on populations but not paid by polluters.


And the end game of oil continues to play itself out. And remember - once renewable power is installed, energy is near zero marginal cost.
Perhaps the most interesting piece of data to come out in the latest Lawrence Berkeley National Lab reports is the trend in the price of solar power purchase agreements, or PPAs. These reflect the prices paid under long-term contracts for the bulk purchase of solar electricity. The latest data show that the 2015 solar PPA price fell below $50 per megawatt-hour (or 5 cents per kilowatt-hour) in 4 of the 5 regions analyzed. In the power industry, the rule of thumb for the average market price of electricity is about $30 to $40 per megawatt-hour—so solar is poised to match the price of conventional power generation if prices continue to decline.
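For a quick sanity check of those figures, here is a trivial Python sketch of the unit conversion - the numbers are simply the ones quoted above, nothing else is assumed:

# Unit check: $/MWh -> cents/kWh (1 MWh = 1,000 kWh).
solar_ppa_usd_per_mwh = 50.0
wholesale_usd_per_mwh = (30.0, 40.0)   # rule-of-thumb market price range

cents_per_kwh = solar_ppa_usd_per_mwh * 100 / 1000
print(cents_per_kwh)                                     # 5.0 cents/kWh
print([p * 100 / 1000 for p in wholesale_usd_per_mwh])   # [3.0, 4.0] cents/kWh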

The Price of Solar Is Declining to Unprecedented Lows

Despite already low costs, the installed price of solar fell by 5 to 12 percent in 2015
Now, the latest data show that the continued decrease in solar prices is unlikely to slow down anytime soon, with total installed prices dropping by 5 percent for rooftop residential systems, and 12 percent for larger utility-scale solar farms. With solar already achieving record-low prices, the cost decline observed in 2015 indicates that the coming years will likely see utility-scale solar become cost competitive with conventional forms of electricity generation.  

A full analysis of the ongoing decline in solar prices can be found in two separate Lawrence Berkeley National Laboratory Reports: Tracking the Sun IX focuses on installed pricing trends in the distributed rooftop solar market while Utility-Scale Solar 2015 focuses on large-scale solar farms that sell bulk power to the grid.

Put together, the reports show that all categories of solar have seen significantly declining costs since 2010. Furthermore, larger solar installations consistently beat out their smaller counterparts when it comes to the installed cost per rated Watt of solar generating capacity (or $/WDC).


And if more proof is needed.
It used to be the case that the outlook for coal in the United States was not a good predictor for coal’s fortunes in the rest of the world. However, we may now have reached a turning point where the competitiveness of wind and solar is a global phenomenon, and, just as in the United States, spells the demise of further coal growth everywhere.

Combined Wind and Solar Reach 7.2% of Total US Electricity in 1H 2016

The transition to renewables, wind and solar power in particular, has typically run ahead of expectations this decade and fresh data from the United States illustrates this phenomenon nicely. In the first half of this year, combined wind and solar provided 140.97 TWh of the 1959.20 TWh generated in the country. At the start of the year, the TerraJoule.us forecast was that combined wind and solar would contribute 6.5%. But in the first six months of the year, the combined share is already at 7.2%.

... it’s hard to believe but just five years ago, coal was holding on to more than a 40% share of US power generation. That share has now fallen to 28% in the 1H of 2016, and will decline further. However, because the great wave of recent coal retirements is slowing down, coal’s share of US electricity generation will hold firm at 20-25% as we head into the end of the decade. Coal growth in the United States has now fully terminated—and that may also be the case globally. The relentless cost declines and capacity factor increases for both wind and solar are now very much a part of coal’s current troubles, and the learning rate of renewables is set to bear down further on coal.
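The 7.2% share follows directly from the generation figures quoted above - a trivial Python check, using only those numbers:

# Wind + solar share of total US generation, first half of 2016.
wind_plus_solar_twh = 140.97
total_generation_twh = 1959.20
print(f'{wind_plus_solar_twh / total_generation_twh:.1%}')   # 7.2%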


The world’s history has many situations where people have come together to manage and share common-pool resources - as Elinor Ostrom’s life work has well demonstrated. It is only when people can’t talk together to determine rules - and rules about changing the rules - that the mythic tragedy of the commons takes place. In most situations of real importance, people generally find a way to talk.

How water shortages flow into collaboration not war

Water fuels human life everywhere, so shortages should spell disaster. But three new books exploring the complexities of water politics reach surprising conclusions
OK, MARK TWAIN never actually said “whiskey’s for drinking, but water’s for fighting over”. The first authenticated use of that famous aphorism of the American West was actually by a government official in Montana in 1983. But in a world of growing water shortages, it is just too good not to quote. Journalist John Fleck certainly can’t resist it, even if the full title of his book is Water is for Fighting Over… And other myths about water in the West.

Water matters. Across the world, we are running out of the stuff. Not absolutely, but where we want it and when we want it. Everyone expects “water wars”: Amazon offers seven books with that title. But what if the real story is how water shortages promote the politics of cooperation rather than conflict? In their different ways, three new books all make that case: in China, in the American West, and globally.


And on the topic of finding ways to collaborate - this is an interesting study of online dating.

The Internet is systematically changing who we date

There are two conflicting theories about how the Internet might be transforming romance today. On one hand, the Internet could make it easier for people to seek out a partner that's exactly like them. With a much bigger pool of people to choose from online, you have a greater chance of finding that perfect match who has a PhD, loves cats and musical soundtracks, and shares both your religion and love of nachos.

On the other hand, the Internet also expands people’s networks and exposes them to others that they would not necessarily meet in their daily life. Online, people are exposed to those of different races, religions and educational backgrounds they might not normally encounter at work, school, church or through friends. And that could be leading people to find more diverse partners.

More people are meeting their partners online than ever before. So it’s worth asking whether the Internet is transforming romance today.

In a recently published study, Gina Potarca of the University of Lausanne, in Switzerland, finds that the Internet appears to be a force for breaking down some of the social barriers that prevent people from marrying those who are different from them. To some extent, the Internet is leading to more mixing between people of different races, religions and educational backgrounds, the study shows.