Thursday, September 8, 2016

Friday Thinking 9 Sept 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

The Internet is systematically changing who we date

….the whole world is facing a data crunch. Counting everything from astronomical images and journal articles to YouTube videos, the global digital archive will hit an estimated 44 trillion gigabytes (GB) by 2020, a tenfold increase over 2013. By 2040, if everything were stored for instant access in, say, the flash memory chips used in memory sticks, the archive would consume 10–100 times the expected supply of microchip-grade silicon.

How DNA could store all the world’s data
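A quick sanity check of the growth figures in that excerpt - using only the article's own numbers (~44 trillion GB by 2020, a tenfold increase over 2013) - works out like this:

```python
# Back-of-the-envelope check of the archive-growth figures quoted above.
# Both inputs are the article's numbers, not independent estimates.
total_2020_gb = 44e12               # ~44 trillion gigabytes by 2020
total_2013_gb = total_2020_gb / 10  # "a tenfold increase over 2013"

years = 2020 - 2013
implied_annual_growth = 10 ** (1 / years) - 1  # tenfold over seven years

print(round(total_2013_gb / 1e12, 1))           # ~4.4 trillion GB in 2013
print(f"{implied_annual_growth:.0%} per year")  # ~39% compound growth
```

That roughly 39% a year of compounding is why the authors worry about outrunning the supply of microchip-grade silicon by 2040.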

... in recent years, as parents turn to streaming services to keep their kids entertained, the average child is now spared over 150 hours of commercials a year.

The math is pretty straightforward: the average kid spends 1.8 hours a day using a streaming service like Netflix, Amazon Prime, or Hulu, none of which runs commercials on children's programming. Over the course of the year, that's about 650 hours of streaming consumption. With the average hour of television housing over 14 minutes of commercials, it's easy to see how many ads our children are spared on a yearly basis.

Netflix saves kids from over 150 hours of commercials a year
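The article's arithmetic checks out. Spelled out with its own figures (1.8 hours a day, 14 ad-minutes per broadcast hour):

```python
# The Netflix piece's arithmetic, spelled out using its own figures.
streaming_hours_per_day = 1.8
streaming_hours_per_year = streaming_hours_per_day * 365  # ~657 ("about 650")

ad_minutes_per_broadcast_hour = 14
ads_avoided_hours = streaming_hours_per_year * ad_minutes_per_broadcast_hour / 60

print(round(streaming_hours_per_year))  # 657
print(round(ads_avoided_hours))         # 153 -> "over 150 hours" a year
```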

I’m really excited - probably prematurely - but it looks like the incumbents will finally get the competition they’ve deserved for… well, ever! If optical fiber were provided to every home and building as part of public infrastructure, this would not only disrupt current incumbent rent-seekers but provide the platform for the 21st Century economy. However, Google’s plan only works with their two new phones. :(
Still, this represents a significant change - enabling people to get an international phone - and that’s an exciting development.
“To stay connected in places where your cell connection isn’t as strong, you can have your phone or tablet automatically connect to open Wifi networks that we’ve verified as fast and reliable,” says Google. “Wifi Assistant is a free service that makes these connections for you securely.”

Google to offer free Wifi mobile calling in North America and Europe

Google is taking the next step in competing with mobile operators by extending its free Wifi alternative for voice and data “in the next few weeks”.
The service, called Wifi Assistant, will be available to all owners of Google Nexus phones in the US, Canada, Mexico, the UK and Nordic countries, said Google in a statement.

“This will roll out to users over the next few weeks,” said the company. It promises “more than a million free, open Wifi hotspots” that will connect automatically to allow Nexus owners to move their data off mobile networks.

This was originally a feature of Project Fi, Google’s US-based mobile service that uses capacity on Sprint, T-Mobile US and US Cellular networks, but also Wifi where available.

This is a must view video about the present, the future and foresight methods.

Jerome C. Glenn on Singularity 1 on 1: Science is an epistemology in the house of philosophy

Jerome C. Glenn is co-founder and Director of The Millennium Project and I had great fun talking to him during our first interview. But it has been over two years since our previous conversation and so, when Jason Ganz reminded me that the latest State of the Future report has been out for several months now, I jumped at the opportunity to have Jerome back on Singularity 1 on 1.

In this second discussion with Glenn we cover a wide variety of topics such as: The State of the Future report; if the world is coming to an end; the definition of war and the conflicts in Palestine, Syria, Iraq and Ukraine; things that changed and things that did not change since the last interview; infectious disease epidemics and the containment thereof; bitcoin and cryptocurrencies; the 15 global challenges and why ethics is one of them; sea/salt water agriculture; the growing rich-poor gap and technological unemployment…

This is a short update on MOOCs worth the read.
By now it is crystal clear that MOOCs cannot be compared to traditional courses. Yes, they may replace and/or supplement existing courses, but they are fundamentally different.

MOOCs and Beyond

By now we know that MOOCs are not the final answer. Higher education will not be saved (or destroyed) by these massive open online courses that splashed into everyone’s consciousness about three years ago. Yes, they provide some fascinating opportunities for expanding access to higher education, for helping us to rethink how teaching and learning works, and for revitalizing the debate about the role of faculty and the power (or futility) of going to college. But most pundits and educators have moved on to the next shiny new fad.
This is a mistake.

For underneath and behind the scenes, much progress continues to be made. In fact, I would suggest that it is only now – after three frustrating years when expectations were raised far too high and subsequently plummeted far too low – that we are starting to see the real opportunities.

Here is a great TED Talk by Don Tapscott about the Blockchain. For anyone who still doesn’t understand the Disruptive power of this distributed ledger technology - this is a must view.

Don Tapscott: How the blockchain is changing money and business

What is the blockchain? If you don't know, you should; if you do, chances are you still need some clarification on how it actually works. Don Tapscott is here to help, demystifying this world-changing, trust-building technology which, he says, represents nothing less than the second generation of the internet and holds the potential to transform money, business, government and society.

The rise of collective intelligence is about a lot more than the emergence of connected humans - it is also about connecting computational and algorithmic prosthetics.

Rise of the Strategy Machines

While humans may be ahead of computers in the ability to create strategy today, we shouldn’t be complacent about our dominance.
As a society, we are becoming increasingly comfortable with the idea that machines can make decisions and take actions on their own. We already have semi-autonomous vehicles, high-performing manufacturing robots, and automated decision making in insurance underwriting and bank credit. We have machines that can beat humans at virtually any game that can be programmed. Intelligent systems can recommend cancer cures and diabetes treatments. “Robotic process automation” can perform a wide variety of digital tasks.

What we don’t have yet, however, are machines for producing strategy. We still believe that humans are uniquely capable of making “big swing” strategic decisions. For example, we wouldn’t ask a computer to put together a new “mobility strategy” for a car company based on such trends as a decreased interest in driving among teens, the rise of ride-on-demand services like Uber and Lyft, and the likelihood of self-driving cars at some point in the future. We assume that the defined capabilities of algorithms are no match for the uncertainties, high-level issues, and problems that strategy often serves up.

We may be ahead of smart machines in our ability to strategize right now, but we shouldn’t be complacent about our human dominance. First, it’s not as if we humans are really that great at it. We know, for example, that the success rate of M&A deals is no better than a coin toss, and one study suggests that 83% of such deals fail to achieve their original goals. New products routinely bomb in the marketplace, companies expand unsuccessfully into new regions and countries, and myriad other strategic decisions don’t pan out.

Here’s a MUST VIEW 4min peek at the future of learning - of how a self-driving car can become a personal learning environment - even of the future of certain types of tourism.

Field Trip to Mars: Framestore's shared VR experience delivered with Unreal Engine 4

BAFTA, Oscar and Cannes Lions-winning VFX house Framestore recently took a group of school children on a Field Trip to Mars, courtesy of a one-of-a-kind US school bus, Unreal Engine and a brilliantly conceived shared VR experience.

And talking about the future - everyone should recognize - “Tea, Earl Grey, Hot” - Captain Picard ordering his beverage from the replicator. Here’s something that may not be too far away.

When Molecular Nanofactory is realized then a desktop Whiskey Machine will produce spirits at less than 36 cents per bottle

While a lot has been written on the application of molecular nanotechnology to medicine, computing, the environment, and so forth, very little has been written on the manufacturing of atomically precise food.

But which kind of food? When analyzing or developing a new technology, you start with the simplest case. Unadorned beverages will be technically easier to manufacture than solid foods (e.g., steaks) because they require no specific three-dimensional structure and are essentially just solutions of chemicals dissolved in water. The trick is to know what and how much of each chemical, and to be able to manufacture them quickly, accurately, and cheaply enough to represent a significant advance over current methods. Nanofactories will enable this.

Alcohol is always a fun topic of general public interest, and whiskey is perhaps the most challenging of the fine spirits to analyze and synthesize, so this seemed like a good representative exemplar on which to focus a preliminary study. Robert Freitas has completed this preliminary study. The proposed Whiskey Machine would make a low-cost beverage that tastes as good as it is physically possible for that type of beverage to taste, down to the last atom!

This paper is the first serious scaling study of a nanofactory designed for the manufacture of a specific food product, in this case high-value-per-liter alcoholic beverages. The analysis indicates that a 6-kg desktop appliance called the Fine Spirits Synthesizer, a.k.a. the “Whiskey Machine,” consuming 300 Watts of power for all atomically precise mechanosynthesis operations, along with a commercially available 59-kg 900 Watt cryogenic refrigerator, could produce one 750 ml bottle per hour of any fine spirit beverage for which the molecular recipe is precisely known, at a manufacturing cost of about $0.36 per bottle, assuming no reduction in the current $0.07/kWh cost for industrial electricity. The appliance’s carbon footprint is a minuscule 0.3 gm CO2 emitted per bottle, more than 1000 times smaller than the 460 gm CO2 per bottle carbon footprint of conventional distillery operations today. The same desktop appliance can intake a tiny physical sample of any fine spirit beverage and produce a complete molecular recipe for that product in ~17 minutes of run time, consuming less than 25 Watts of power, at negligible additional cost.
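Worth noting how little of that $0.36 is electricity. A rough check using only the abstract's figures (the rest of the per-bottle cost - feedstock, amortization - isn't broken out in the excerpt):

```python
# Rough check of the Whiskey Machine figures quoted above. All inputs are
# the paper's numbers; the cost breakdown beyond electricity is not given.
synth_watts = 300        # mechanosynthesis operations
fridge_watts = 900       # cryogenic refrigerator
hours_per_bottle = 1.0   # one 750 ml bottle per hour
dollars_per_kwh = 0.07   # quoted industrial electricity price

electricity_cost = (synth_watts + fridge_watts) / 1000 * hours_per_bottle * dollars_per_kwh
print(round(electricity_cost, 3))  # 0.084 -> well under a quarter of the $0.36

footprint_ratio = 460 / 0.3  # conventional vs. nanofactory gm CO2 per bottle
print(round(footprint_ratio))  # ~1533, i.e. "more than 1000 times smaller"
```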

The rapidly emerging Internet of Things (especially sensors) may seem like a lot of hype and little motion. Sometimes things move slowly and then move very fast. The possibility of developing whole new senses also coincides with new understanding of the senses we already have. There’s more common sense than we imagine.
In 2003, Hatt and his colleagues showed that olfactory receptors in human sperm were functional and could be activated by an odor molecule, just like the receptors in the nose.
Over the next decade, Hatt’s team and others continued to identify olfactory receptors in a variety of human tissues, including the lungs, liver, skin, heart, and intestines. In fact, they are some of the most highly expressed genes in many tissues. “One can be sure that these receptors must have enormous importance for the cell,” Hatt says.

What Sensory Receptors Do Outside of Sense Organs

Odor, taste, and light receptors are present in many different parts of the body, and they have surprisingly diverse functions.
The light, odor, and taste receptors located in our eyes, noses, and tongues flood our brain with information about the world around us. But these same sensory receptors are also present in unexpected places around the body, where they serve a surprising range of biological roles. In the last decade or so, researchers have found that the gut “tastes” parasites before initiating immune responses, and the kidneys “smell” fatty acids, regulating blood pressure in response.

In contrast to the early days of the field, the idea of sensory receptors outside of sensory organs is no longer unusual. “They’re all just chemoreceptors, and you can use them in lots of different contexts in physiologically different systems,” says University of Colorado Denver neurobiologist Thomas Finger.

Now researchers are characterizing such sense receptors present in different tissues around the body and working to understand their functions, with the eventual goal of using these receptors for various diagnostic or therapeutic applications. Preliminary trials are underway to test therapeutic uses of light-induced vasodilation in humans, for example, and clinical trials will soon test whether a patient’s taste receptors—both those in the mouth and those in the airway—could be used to diagnose and to treat respiratory infections, respectively. While many of the details of the receptors’ activation and downstream signaling remain unclear, researchers are finally getting closer to making sense of what these receptors do outside of the classic sense organs. And labs are using modern genetic tools, such as arrays to detect gene expression or protein levels in different tissues, to pin them down.

This is very fascinating - as we domesticate matter, we learn to ‘hack’ matter and in this way hack into computers. For anyone interested in computer security, this is a must read.
…. new attacks use a technique Google researchers first demonstrated last March called “Rowhammer.” The trick works by running a program on the target computer, which repeatedly overwrites a certain row of transistors in its DRAM memory, “hammering” it until a rare glitch occurs: electric charge leaks from the hammered row of transistors into an adjacent row. The leaked charge then causes a certain bit in that adjacent row of the computer’s memory to flip from one to zero or vice versa. That bit flip gives you access to a privileged level of the computer’s operating system.
It’s messy. And mind-bending. And it works.

Forget Software—Now Hackers Are Exploiting Physics

PRACTICALLY EVERY WORD we use to describe a computer is a metaphor. “File,” “window,” even “memory” all stand in for collections of ones and zeros that are themselves representations of an impossibly complex maze of wires, transistors and the electrons moving through them. But when hackers go beyond those abstractions of computer systems and attack their actual underlying physics, the metaphors break.

Over the last year and a half, security researchers have been doing exactly that: honing hacking techniques that break through the metaphor to the actual machine, exploiting the unexpected behavior not of operating systems or applications, but of computing hardware itself—in some cases targeting the actual electricity that comprises bits of data in computer memory. And at the Usenix security conference earlier this month, two teams of researchers presented attacks they developed that bring that new kind of hack closer to becoming a practical threat.

Learning is still a mystery - could it be part of the next Turing Test? Deep learning algorithms remain a sort of black box - but whether we understand it or not, learning seems to happen.
“Our study uses the Turing test to reveal how a given system – not necessarily a human – works. In our case, we put a swarm of robots under surveillance and wanted to find out which rules caused their movements. To do so, we put a second swarm – made of learning robots – under surveillance too. The movements of all the robots were recorded, and the motion data shown to interrogators.”

Researchers discover machines can learn by simply observing, without being told what to look for

Researchers working with swarm robots make major breakthrough
It is now possible for machines to learn how natural or artificial systems work by observing them, without being told what to look for
This could lead to advances in how machines infer knowledge and use it for predicting behaviour and detecting abnormalities

Here’s a 27 page Stanford University report on the future of Artificial Intelligence.


The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. It considers the science, engineering, and deployment of AI-enabled computing systems. As its core activity, the Standing Committee that oversees the One Hundred Year Study forms a Study Panel every five years to assess the current state of AI. The Study Panel reviews AI’s progress in the years following the immediately prior report, envisions the potential advances that lie ahead, and describes the technical and societal challenges and opportunities these advances raise, including in such arenas as ethics, economics, and the design of systems compatible with human cognition. The overarching purpose of the One Hundred Year Study’s periodic expert review is to provide a collected and connected set of reflections about AI and its influences as the field advances. The studies are expected to develop syntheses and assessments that provide expert-informed guidance for directions in AI research, development, and systems design, as well as programs and policies to help ensure that these systems broadly benefit individuals and society.

And if learning machines aren’t enough - the computational paradigms continue to develop. Advances in quantum capabilities go beyond computer-type concepts and could contribute to renewable energy and artificial photosynthesis.
“They are definitely the world leaders now, there is no doubt about it,” says Simon Devitt at the RIKEN Center for Emergent Matter Science in Japan. “It’s Google’s to lose. If Google’s not the group that does it, then something has gone wrong.”

Revealed: Google’s plan for quantum computer supremacy

The field of quantum computing is undergoing a rapid shake-up, and engineers at Google have quietly set out a plan to dominate
SOMEWHERE in California, Google is building a device that will usher in a new era for computing. It’s a quantum computer, the largest ever made, designed to prove once and for all that machines exploiting exotic physics can outperform the world’s top supercomputers.

And New Scientist has learned it could be ready sooner than anyone expected – perhaps even by the end of next year.

The quantum computing revolution has been a long time coming. In the 1980s, theorists realised that a computer based on quantum mechanics had the potential to vastly outperform ordinary, or classical, computers at certain tasks. But building one was another matter. Only recently has a quantum computer that can beat a classical one gone from a lab curiosity to something that could actually happen. Google wants to create the first.

The firm’s plans are secretive, and Google declined to comment for this article. But researchers contacted by New Scientist all believe it is on the cusp of a breakthrough, following presentations at conferences and private meetings.

Even China is recognizing that the race to develop AI capability can’t be won via purely proprietary approaches - Baidu now joins the ranks of Google, Microsoft, Facebook and others in making their AI software open-source.
“You don’t need to be an expert to quickly apply this to your project,” Xu said in an interview. “You don’t worry about writing math formulas or how to handle data tasks.” (Indeed, the playful doubling of the original code-name is intended to convey that it’s easier to use than rival software.)
In the case of open-sourcing AI algorithms, and specifically at Baidu, the aim is to draw more deep learning engineers, who are in very high demand today. “People will recognize Baidu as a leader, so it will attract more talent,” said Xu.

China’s Baidu to open-source its deep learning AI platform

The Chinese Internet giant Baidu Inc. has been making big progress in applying deep learning neural networks to improve image recognition, language translation, search ranking and click prediction in advertising. Now, it’s going to give a lot of it away.

The company, often called “China’s Google,” will announce Thursday at the annual Baidu World conference in Beijing that it’s offering the artificial intelligence software that its own engineers have been using for years as open source. Available in an early version on GitHub with full availability Sept. 30, it’s code-named PaddlePaddle, for PArallel Distributed Deep LEarning.

Deep learning is the branch of machine learning that attempts to emulate the way neurons work in the human brain to find patterns in sounds, images, and other data. Google, Facebook, Microsoft, IBM and other companies have also been making big breakthroughs thanks to the ability to pump massive amounts of data into these artificial neural networks.

The announcement follows the open-sourcing over the last two years of other machine intelligence and deep learning tools, such as Torch and machine-vision technology from Facebook, TensorFlow from Google, the Computational Network Toolkit (CNTK) from Microsoft and DSSTNE from Amazon, as well as independent open source frameworks such as Caffe. Baidu has also open-sourced other pieces of its AI code. But Xu Wei, the Baidu distinguished scientist who led PaddlePaddle’s development, said this software is intended for broader use, even by programmers who aren’t experts in deep learning, which involves painstaking training of software models.

And related to emerging new and powerful computing paradigms - another entry in the “Moore’s Law is Dead - Long Live Moore’s Law” file - here’s something about the development of biocomputing.

How DNA could store all the world’s data

Modern archiving technology cannot keep up with the growing tsunami of bits. But nature may hold an answer to that problem already.
It was Wednesday 16 February 2011, and Goldman was at a hotel in Hamburg, Germany, talking with some of his fellow bioinformaticists about how they could afford to store the reams of genome sequences and other data the world was throwing at them. He remembers the scientists getting so frustrated by the expense and limitations of conventional computing technology that they started kidding about sci-fi alternatives. “We thought, 'What's to stop us using DNA to store information?'”

Then the laughter stopped. “It was a lightbulb moment,” says Goldman, a group leader at the European Bioinformatics Institute (EBI) in Hinxton, UK. True, DNA storage would be pathetically slow compared with the microsecond timescales for reading or writing bits in a silicon memory chip. It would take hours to encode data by synthesizing DNA strings with a specific pattern of bases, and still more hours to recover that information using a sequencing machine. But with DNA, a whole human genome fits into a cell that is invisible to the naked eye. For sheer density of information storage, DNA could be orders of magnitude beyond silicon — perfect for long-term archiving.

Here’s another milestone in the “Moore’s Law is Dead - Long Live Moore’s Law” file. Although this is not yet ready for commercial scale production - that’s next in line.
"This achievement has been a dream of nanotechnology for the last 20 years," says Arnold. "Making carbon nanotube transistors that are better than silicon transistors is a big milestone. This breakthrough in carbon nanotube transistor performance is a critical advance toward exploiting carbon nanotubes in logic, high-speed communications, and other semiconductor electronics technologies."

For first time, carbon nanotube transistors outperform silicon

For decades, scientists have tried to harness the unique properties of carbon nanotubes to create high-performance electronics that are faster or consume less power—resulting in longer battery life, faster wireless communication and faster processing speeds for devices like smartphones and laptops.

But a number of challenges have impeded the development of high-performance transistors made of carbon nanotubes, tiny cylinders made of carbon just one atom thick. Consequently, their performance has lagged far behind semiconductors such as silicon and gallium arsenide used in computer chips and personal electronics.
Now, for the first time, University of Wisconsin-Madison materials engineers have created carbon nanotube transistors that outperform state-of-the-art silicon transistors.

Led by Michael Arnold and Padma Gopalan, UW-Madison professors of materials science and engineering, the team's carbon nanotube transistors achieved current that's 1.9 times higher than silicon transistors. The researchers reported their advance in a paper published Friday (Sept. 2) in the journal Science Advances.

This is a wonderful 6min video highlighting that the domestication of bacteria has a very long history indeed - in fact, plants became masters of it long before humans domesticated plants.

Bacteria-Taming Plants

To grow, plants need nutrients that they draw from soil, especially nitrogen. Pulse crops (the UN declared 2016 the International Year of Pulses (IYP)) have developed a cunning strategy: they partner up with bacteria able to metabolize nitrogen directly from the atmosphere. Researchers are very interested in this phenomenon, and hope to apply it to develop therapeutic applications or use atmospheric nitrogen in agriculture.

The end game of fossil fuels is becoming recognized by major investors.
"We're calling on governments to kick away these carbon crutches, reveal the true impact to society of fossil fuels and take into account the price we will pay in the future for relying on them,"

'The Mother of All Risks': Insurance Giants Call on G20 to Stop Bankrolling Fossil Fuels

Multinational firms managing $1.2tn in assets declare subsidies for coal, oil, and gas 'simply unsustainable'
Warning that climate change amounts to the "mother of all risks," three of the world's biggest insurance companies this week are demanding that G20 countries stop bankrolling the fossil fuels industry.

Multi-national insurance giants Aviva, Aegon, and Amlin, which together manage $1.2tn in assets, released a statement Tuesday calling on the leaders of the world's biggest economies to commit to ending coal, oil, and gas subsidies within four years.

"Climate change in particular represents the mother of all risks—to business and to society as a whole. And that risk is magnified by the way in which fossil fuel subsidies distort the energy market," said Aviva CEO Mark Wilson. "These subsidies are simply unsustainable."

According to a recent report by the International Monetary Fund (IMF), fossil fuel companies receive an estimated $5.3tn a year in global subsidies—a figure that included, as the IMF put it, the "real costs" associated with damage to the environment and human health that are foisted on populations but not paid by polluters.

And the end game of oil continues to play itself out. And remember - once renewable power is installed, energy is at near-zero marginal cost.
Perhaps the most interesting piece of data to come out in the latest Lawrence Berkeley National Lab reports is the trend in the price of solar power purchase agreements or PPAs. These prices reflect the price paid for long-term contracts for the bulk purchase of solar electricity. The latest data show that the 2015 solar PPA price fell below $50 per megawatt-hour (or 5 cents per kilowatt-hour) in 4 of the 5 regions analyzed. In the power industry, the rule of thumb for the average market price of electricity is about $30 to $40 per megawatt-hour—so solar is poised to match the price of conventional power generation if prices continue to decline.

The Price of Solar Is Declining to Unprecedented Lows

Despite already low costs, the installed price of solar fell by 5 to 12 percent in 2015
Now, the latest data show that the continued decrease in solar prices is unlikely to slow down anytime soon, with total installed prices dropping by 5 percent for rooftop residential systems, and 12 percent for larger utility-scale solar farms. With solar already achieving record-low prices, the cost decline observed in 2015 indicates that the coming years will likely see utility-scale solar become cost competitive with conventional forms of electricity generation.  

A full analysis of the ongoing decline in solar prices can be found in two separate Lawrence Berkeley National Laboratory Reports: Tracking the Sun IX focuses on installed pricing trends in the distributed rooftop solar market while Utility-Scale Solar 2015 focuses on large-scale solar farms that sell bulk power to the grid.

Put together, the reports show that all categories of solar have seen significantly declining costs since 2010. Furthermore, larger solar installations consistently beat out their smaller counterparts when it comes to the installed cost per rated Watt of solar generating capacity (or $/WDC).
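The $/MWh-to-cents/kWh conversion behind the PPA comparison above is just a divide-by-ten, which makes it easy to redo for any contract price:

```python
# Unit conversion behind the solar PPA comparison quoted above.
def dollars_per_mwh_to_cents_per_kwh(dollars_per_mwh):
    # 1 MWh = 1,000 kWh and $1 = 100 cents, so divide by 10
    return dollars_per_mwh * 100 / 1000

print(dollars_per_mwh_to_cents_per_kwh(50))  # 5.0 -> the 2015 solar PPA level
print(dollars_per_mwh_to_cents_per_kwh(35))  # 3.5 -> mid-range conventional power
```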

And if more proof is needed.
It used to be the case that the outlook for coal in the United States was not a good predictor for coal’s fortunes in the rest of the world. However, we may now have reached a turning point where the competitiveness of wind and solar is a global phenomenon, and, just as in the United States, spells the demise of further coal growth everywhere.

Combined Wind and Solar Reach 7.2% of Total US Electricity in 1H 2016

The transition to renewables, wind and solar power in particular, has typically run ahead of expectations this decade and fresh data from the United States illustrates this phenomenon nicely. In the first half of this year, combined wind and solar provided 140.97 TWh of the 1959.20 TWh generated in the country. At the start of the year, the forecast was that combined wind and solar would contribute 6.5%. But in the first six months of the year, the combined share is already at 7.2%.

... it’s hard to believe but just five years ago, coal was holding on to more than a 40% share of US power generation. That share has now fallen to 28% in the 1H of 2016, and will decline further. However, because the great wave of recent coal retirements is slowing down, coal’s share of US electricity generation will retain a firm 20-25% as we head into the end of the decade. Coal growth in the United States has now fully terminated—and that may also be the case globally. The relentless cost declines and capacity factor increases for both wind and solar are now very much a part of coal’s current troubles, and the learning rate of renewables is set to bear down further on coal.
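The 7.2% share is easy to verify from the raw generation numbers quoted above:

```python
# Verifying the generation-share figure from the raw TWh numbers quoted above.
wind_and_solar_twh = 140.97
total_generation_twh = 1959.20

share = wind_and_solar_twh / total_generation_twh
print(f"{share:.1%}")  # 7.2%, vs. the 6.5% forecast at the start of the year
```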

The world’s history has many situations where people have come together to manage and share common-pool resources - as Elinor Ostrom’s life work has well demonstrated. It is only when people can’t talk together and determine rules - and rules about changing the rules - that the myth of the tragedy of the commons takes place. In most situations of real importance, people generally find a way to talk.

How water shortages flow into collaboration not war

Water fuels human life everywhere, so shortages should spell disaster. But three new books exploring the complexities of water politics reach surprising conclusions
OK, MARK TWAIN never actually said “whiskey’s for drinking, but water’s for fighting over”. The first authenticated use of that famous aphorism of the American West was actually by a government official in Montana in 1983. But in a world of growing water shortages, it is just too good not to quote. Journalist John Fleck certainly can’t resist it, even if the full title of his book is Water is for Fighting Over… And other myths about water in the West.

Water matters. Across the world, we are running out of the stuff. Not absolutely, but where we want it and when we want it. Everyone expects “water wars”: Amazon offers seven books with that title. But what if the real story is how water shortages promote the politics of cooperation rather than conflict? In their different ways, three new books all make that case: in China, in the American West, and globally.

And on the topic of finding ways to collaborate - this is an interesting study of online dating.

The Internet is systematically changing who we date

There are two conflicting theories about how the Internet might be transforming romance today. On one hand, the Internet could make it easier for people to seek out a partner that's exactly like them. With a much bigger pool of people to choose from online, you have a greater chance of finding that perfect match who has a PhD, loves cats and musical soundtracks, and shares both your religion and love of nachos.

On the other hand, the Internet also expands people’s networks and exposes them to others that they would not necessarily meet in their daily life. Online, people are exposed to those of different races, religions and educational backgrounds they might not normally encounter at work, school, church or through friends. And that could be leading people to find more diverse partners.

More people are meeting their partners online than ever before. So it’s worth asking whether the Internet is transforming romance today.

In a recently published study, Gina Potarca of the University of Lausanne, in Switzerland, finds that the Internet appears to be a force for breaking down some of the social barriers that prevent people from marrying those who are different from them. To some extent, the Internet is leading to more mixing between people of different races, religions and educational backgrounds, the study shows.
