Thursday, June 1, 2017

Friday Thinking 2 June 2017

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9



we’ve begun to realize that as systems became tougher and more difficult to penetrate technically, the bad guys have been turning to the users. The people who use systems tend to have relatively little say in them because they are a dispersed interest. And in the case of modern systems funded by advertising, they’re not even the customer, they’re the product.

When you look at systems like Facebook, all the hints and nudges that the website gives you are towards sharing your data so it can be sold to the advertisers. They’re all towards making you feel that you’re in a much safer and warmer place than you actually are. Under those circumstances, it’s entirely understandable that people end up sharing information in ways that they later regret and which end up being exploited. People learn over time, and you end up with a tussle between Facebook and its users whereby Facebook changes the privacy settings every few years to opt everybody back into advertising, people protest, and they opt out again. This doesn’t seem to have any stable equilibrium.

The Threat - A Conversation With Ross Anderson



 All of the major tech players, companies from other industries and startups whose names we don’t know yet are working away on some or all of the new major building blocks of the future. They are: Artificial intelligence / machine learning, augmented reality, virtual reality, robotics and drones, smart homes, self-driving cars, and digital health / wearables.

All of these things have dependencies in common. They include greater and more distributed computing power, new sensors, better networks, smarter voice and visual recognition, and software that’s simultaneously more intelligent and more secure.

I expect that one end result of all this work will be that the technology, the computer inside all these things, will fade into the background. In some cases, it may entirely disappear, waiting to be activated by a voice command, a person entering the room, a change in blood chemistry, a shift in temperature, a motion. Maybe even just a thought.

Your whole home, office and car will be packed with these waiting computers and sensors. But they won’t be in your way, or perhaps even distinguishable as tech devices.

This is ambient computing, the transformation of the environment all around us with intelligence and capabilities that don’t seem to be there at all.

I expect to see much of this new ambient computing world appear within 10 years, and all of it appear within 20.

if we are really going to turn over our homes, our cars, our health and more to private tech companies, on a scale never imagined, we need much, much stronger standards for security and privacy than now exist. Especially in the U.S., it’s time to stop dancing around the privacy and security issues and pass real, binding laws.

And, if ambient technology is to become as integrated into our lives as previous technological revolutions like wood joists, steel beams and engine blocks, we need to subject it to the digital equivalent of enforceable building codes and auto safety standards. Nothing less will do. And health? The current medical device standards will have to be even tougher, while still allowing for innovation.

The tech industry, which has long styled itself as a disruptor, will need to work hand in hand with government to craft these policies. And that might be a bigger challenge than developing the technology in the first place.

We’ve all had a hell of a ride for the last few decades, no matter when you got on the roller coaster. It’s been exciting, enriching, transformative. But it’s also been about objects and processes. Soon, after a brief slowdown, the roller coaster will be accelerating faster than ever, only this time it’ll be about actual experiences, with much less emphasis on the way those experiences get made.

Mossberg: The Disappearing Computer

Tech was once always in your way. Soon, it will be almost invisible.


 Now, the truth is that knowledge consists of conjectured explanations — guesses about what really is (or really should be, or might be) out there in all those worlds. Even in the hard sciences, these guesses have no foundations and don’t need justification. Why? Because genuine knowledge, though by definition it does contain truth, almost always contains error as well. So it is not ‘true’ in the sense studied in mathematics and logic. Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data. And therefore, attempts to work towards creating an AGI that would do the latter are just as doomed as an attempt to bring life to Mars by praying for a Creation event to happen there.

...One implication is that we must stop regarding education (of humans or AGIs alike) as instruction — as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): ‘there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error.’ That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.

How close are we to creating artificial intelligence? – David Deutsch




Global Cyber War I - I believe we are in the midst of the first truly global cyber war. Perhaps it has been warming up since the second decade of the 21st century - but it is now well underway. Participants go far beyond nation states: criminal organizations, corporations and institutions, oligarchies, non-state actors and even individuals. In many ways the digital environment - as a general-purpose technology providing a platform of near-costless coordination - truly enables self-organization around ‘brands’ and causes.
This is a long read - but outlines some key developments.

Tainted Leaks: Disinformation and Phishing With a Russian Nexus

Key Points
  • Documents stolen from a prominent journalist and critic of the Russian government were manipulated and then released as a “leak” to discredit domestic and foreign critics of the government. We call this technique “tainted leaks.”
  • The operation against the journalist led us to the discovery of a larger phishing operation, with over 200 unique targets spanning 39 countries (including members of 28 governments). The list includes a former Russian Prime Minister, members of cabinets from Europe and Eurasia, ambassadors, high ranking military officers, CEOs of energy companies, and members of civil society.
  • After government targets, the second largest set (21%) are members of civil society including academics, activists, journalists, and representatives of non-governmental organizations.
  • We have no conclusive evidence that links these operations to a particular Russian government agency; however, there is clear overlap between our evidence and that presented by numerous industry and government reports concerning Russian-affiliated threat actors.



Here’s a good signal on the rapid development of Blockchain capability and its alternatives, and on how the world’s major financial institutions and many other large corporations are taking it very seriously.

The National Bank of Canada Just Joined An Alliance to Develop Ethereum

On Monday, the Enterprise Ethereum Alliance announced 86 new members that will work together to develop business applications on the Ethereum blockchain, including Toyota, Deloitte, Samsung SDS, and the National Bank of Canada.

Ethereum is an alternative to bitcoin, which still dominates the cryptocurrency world. But while bitcoin has become a haven for speculators trying to win big by trading coins, Ethereum's promise is that its blockchain—the public ledger that records all transactions—is chiefly a platform for developing apps, powered by economic incentives. One often-floated use case for blockchains in the financial industry is as a settlement layer to instantly close transactions without middlemen.

The alliance, which was founded in February of this year, is a global foundation with more than 100 members which include financial institutions like JP Morgan, Credit Suisse, and Banco Santander. Its goal is to develop business applications with Ethereum. Membership in the alliance grants organizations the ability to participate in meetings and events, as well as to make contributions to technical documents and white papers.


Here’s a signal about an application of the Blockchain. Another article with clear explanations and a graphic to help understand just what ‘distributed ledger technologies’ are.

Why blockchain should be global trade’s next port of call

This paper examines the suitability of blockchain and blockchain-based distributed ledger technology (DLT) to the port, harbour and terminal industries. DLT has the potential to drastically change the world of asset transfer, asset movements and security of data movement. Testing of various DLT applications has already started – first in 2009 with the emergence of Bitcoin in the financial services industry, then subsequently in various other fields, including within the supply chain.

Anyone working in the port, harbour and terminal industries needs to understand the potential impact and implications of blockchain – in business, in respect to government interactions and along the supply chain. The technology has the potential to change the way parties operate and interact along the value chain as well as to open doors for new players. Some intermediaries might be impacted, others may be left out of the game.


This is an excellent article - a must read for anyone interested in the future of the digital environment and in its inevitably high-dimensional nature. It introduces two very useful metaphors (rhizomatic vs arborescent) - which may initially seem more obfuscating than clarifying - but by the end I think what is revealed is that our current dichotomy of Hierarchy versus Network is more misleading than helpful.

My rhizomatic frankenstack

1/ Consider the difference between an onion and a piece of ginger. The ginger root is the motif for what philosophers call a rhizome. The onion for what they call an arborescence.

2/ With an onion, you can always tell which way is up, and can distinguish horizontal sections apart from vertical sections easily by visual inspection.

3/ With a piece of ginger, there is no clear absolute orientation around an up, and no clear distinction between horizontal and vertical.

4/ According to the linked Wikipedia entry (worth reading), a rhizome "allows for multiple, non-hierarchical entry and exit points in data representation and interpretation."

5/ If you tend to use the cliched "hierarchies versus networks" metaphor for talking about old versus new IT, you would do well to shift to the rhizomatic/arborescent distinction.

6/ Both onions and ginger roots show some signs of both hierarchical structure and network-like internal connections.

7/ The difference is that one has no default orientation, direction of change, or defining internal symmetries. Rhizomes are disorderly and messy in architectural terms.

8/ The diagram above shows a partial view of my personal frankenstack: a mess sprawling over wordpress, slack, mailchimp and dozens of other technology platforms.

9/ As a free agent solopreneur with a weird mix of activities, my frankenstack is probably more complex than most, but not as complex as some power-users I know.

10/ If you work in a large organization defined by an enterprise IT system, your frankenstack is likely more arborescent than mine. More onion-like.

11/ But this is not going to last much longer. Already, bleeding edge enterprise IT platform architecture is acquiring the rhizomatic characteristics of the consumer web.

12/ Why is the rhizome a better mental model for IT infrastructure than either hierarchies or networks? The answer has to do with the curse of dimensionality.

13/ All of us today live informationally high-dimensional lives. We manage many complex information stocks and flows that merge and mix in a labyrinthine permissions/security matrix.

14/ Hierarchies and networks are both clean, legible architectural patterns. Applying them to high dimensional situations is highly burdensome and largely useless…


This isn’t the singularity - but it is a signal on the way. Sort of a genetic algorithm for machine-learning algorithms.

Google’s New AI Is Better at Creating AI Than the Company’s Engineers

At its I/O '17 conference this week, Google shared details of its AutoML project, an artificial intelligence that can assist in the creation of other AIs. By automating some of the complicated process, AutoML could make machine learning more accessible to non-experts.

The AutoML project focuses on deep learning, a technique that involves passing data through layers of neural networks. Creating these layers is complicated, so Google’s idea was to create AI that could do it for them.

“In our approach (which we call ‘AutoML’), a controller neural net can propose a ‘child’ model architecture, which can then be trained and evaluated for quality on a particular task,” the company explains on the Google Research Blog. “That feedback is then used to inform the controller how to improve its proposals for the next round. We repeat this process thousands of times — generating new architectures, testing them, and giving that feedback to the controller to learn from.”
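The loop Google describes - a controller proposes a child architecture, the child is evaluated, and the feedback informs the next proposal - can be sketched as a toy search. This is a minimal stand-in, not Google's actual AutoML: the search space, the mutation-based "controller," and the `evaluate` scoring function are all hypothetical placeholders for real model training.

```python
import random

# A hypothetical, tiny architecture search space.
SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [16, 32, 64, 128],
    "activation": ["relu", "tanh"],
}

def evaluate(arch):
    """Stand-in for 'train the child model and measure its quality'.
    Here: a made-up score that peaks at 3 layers / 64 units / relu."""
    score = 0.0
    score -= abs(arch["layers"] - 3)
    score -= abs(arch["units"] - 64) / 32
    score += 1.0 if arch["activation"] == "relu" else 0.0
    return score

def propose(best, rng):
    """Controller step: mutate one choice in the best architecture so far."""
    child = dict(best)
    key = rng.choice(list(SEARCH_SPACE))
    child[key] = rng.choice(SEARCH_SPACE[key])
    return child

def search(rounds=200, seed=0):
    rng = random.Random(seed)
    best = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
    best_score = evaluate(best)
    for _ in range(rounds):
        child = propose(best, rng)      # generate a new architecture
        child_score = evaluate(child)   # test it
        if child_score > best_score:    # feed the result back
            best, best_score = child, child_score
    return best, best_score

best, score = search()
```

The real system replaces the mutation step with a trained controller network and the scoring function with full model training, but the propose-evaluate-feedback shape of the loop is the same.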

So far, they have used the AutoML tech to design networks for image and speech recognition tasks. In the former, the system matched Google’s experts. In the latter, it exceeded them, designing better architectures than the humans were able to create.


This is a long read - 31 minutes - however it is an extraordinarily clear account, via metaphor, of how computers store memory. This may sound (as the title suggests) geeky in the extreme. However, understanding it has very practical implications for every aspect of our lives that involves keeping a ‘record’ (law, policy, code, etc.) up to date without forgetting past lessons. What is beautiful about this article is that the ‘ungeeky’ reader can begin to see how the Blockchain came to be conceived as a solution to the problem of memory and the dynamic autonomy to make changes in records.

How Your Data is Stored, or, The Laws of the Imaginary Greeks

If you don’t work in computers, you probably haven’t spent much time thinking about how data gets stored on computers or in the cloud. I’m not talking about the physical ways that hard disks or memory chips work, but about something that’s both a lot more complex and a lot more understandable than you might think: if you have a piece of data that many people want to read and edit at once, like a shared text file, a bank’s records, or the world in a multiplayer game, how does everyone agree on what’s in the document, and make sure that nobody overwrites someone else’s work? This is the problem of “distributed consensus,” and in order to discuss it, we’ll have to discuss bad burritos, sheep-tyrants, and the imaginary islands of ancient Greece.

It turns out that the original scientific paper about one of the most important methods used to solve this problem was written in the form of a lengthy discussion of how the part-time parliament of the imaginary ancient Greek island of Paxos managed to pass laws, despite nobody ever reliably showing up at the legislature. It was a wonderful metaphor for how a bunch of people can agree on what to write to a file, even though they might be unreachable or distant for a time — and the paper is both one of the funniest serious research papers ever published, and one of the best explanations of a complicated algorithm I’ve ever seen. And since this metaphor worked so well to explain one part of this problem, and because it’s a lot more fun than talking about file systems, I’m going to use it to explain all of it to you.
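The algorithm behind that Paxos parliament metaphor can be sketched in miniature. This is a toy, in-memory single-decree Paxos - one value decided, no message loss or process crashes simulated - and the class and function names are illustrative, not from any real library.

```python
class Acceptor:
    """One 'legislator': remembers the highest ballot it has promised
    and the last proposal it actually accepted."""
    def __init__(self):
        self.promised = -1
        self.accepted = None   # (ballot, value) or None

    def prepare(self, ballot):
        if ballot > self.promised:
            self.promised = ballot
            return ("promise", self.accepted)
        return ("reject", None)

    def accept(self, ballot, value):
        if ballot >= self.promised:
            self.promised = ballot
            self.accepted = (ballot, value)
            return "accepted"
        return "reject"

def propose(acceptors, ballot, value):
    """One proposer's round of single-decree Paxos."""
    # Phase 1: gather promises from a majority.
    promises = []
    for a in acceptors:
        verdict, prior = a.prepare(ballot)
        if verdict == "promise":
            promises.append(prior)
    if len(promises) <= len(acceptors) // 2:
        return None  # no quorum, no decision
    # If any acceptor already accepted a value, adopt the one with the
    # highest ballot - this rule is what keeps Paxos safe.
    prior = max((p for p in promises if p is not None),
                default=None, key=lambda p: p[0])
    chosen = prior[1] if prior else value
    # Phase 2: ask the quorum to accept the chosen value.
    acks = sum(1 for a in acceptors if a.accept(ballot, chosen) == "accepted")
    return chosen if acks > len(acceptors) // 2 else None

acceptors = [Acceptor() for _ in range(5)]
decided = propose(acceptors, 1, "law A")   # first law passes
later = propose(acceptors, 2, "law B")     # a later proposer must adopt "law A"
```

Note that the second proposer, despite wanting "law B", ends up re-proposing "law A": once a majority has accepted a value, every later round converges on it, which is exactly the consistency guarantee the paper's part-time parliament dramatizes.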


An interesting signal about the accelerating shift in energy geopolitics. The most fundamental aspect everyone should hold in mind is that renewables promise essentially ‘zero marginal cost’ energy. Even if natural gas is very cheap, you have to keep buying gas - once you have renewable infrastructure, energy is essentially free.

China continues green push with world’s largest floating solar farm

The 40-megawatt solar facility in China’s Anhui province is built on flooded coal mines
China has connected a vast, floating photovoltaic (PV) farm to the local power grid, flicking the switch on the largest facility of its kind in the world.

The 40-megawatt facility is located in the city of Huainan, in Anhui province – an area known for its fossil fuel mining industry. In a neat twist, the floating solar farm is based in a flooded former coal mining area, turned into a lake after heavy rain and ground subsidence.

China is spearheading the adoption of solar power. The country’s solar power output increased by 80% over the first three months of 2017, reaching 21.4 billion kilowatt-hours in the year’s first quarter. As well as the floating solar farms, China is also home to the world’s largest land-based solar plant, covering 27 square kilometres in Qinghai province.

The decrease in cost for solar technology has much to do with the sudden growth of farms like these, but there’s also a sense of China seeking to establish itself as a green superpower. It is still facing many issues with carbon emissions and the breathing health of its urban populace, but projects like the one in Huainan are an encouraging sign that China could build its infrastructure around renewable energy.


Oh the irony. Here’s an educational policy that all carbon-dependent countries should pursue.

A Chinese company is offering free training for US coal miners to become wind farmers

If you want to truly understand what’s happening in the energy industry, the best thing to do is to travel deep into the heart of American coal country, to Carbon County, Wyoming (yes, that’s a real place).

The state produces the most coal in the US, and Carbon County has long been known (and was named) for its extensive coal deposits. But the state’s mines have been shuttering over the past few years, causing hundreds of people to lose their jobs in 2016 alone. Now, these coal miners are finding hope, offered from an unlikely place: a Chinese wind-turbine maker wants to retrain these American workers to become wind-farm technicians. It’s the perfect metaphor for the massive shift happening in the global energy markets.

The news comes from an energy conference in Wyoming, where the American arm of Goldwind, a Chinese wind-turbine manufacturer, announced the free training program. More than a century ago, Carbon County was home to the first coal mine in Wyoming. Soon, it will be the site of a new wind farm with hundreds of Goldwind-supplied turbines.


India is the other key nation to watch - as it struggles to become a fully developed nation by providing the energy necessary to its citizens and industries.
India is attempting to do something no nation has ever done: build a modern industrialized economy, and bring light and power to its entire population, without dramatically increasing carbon emissions. Simply to keep up with rising demand for electricity, it must add around 15 gigawatts each year over the next 30 years.

Why India Keeps Making Grand Claims About Its Energy Future

India appears to be embracing coal and moving away from coal all at once—what’s going on?
Lately we’ve had a mixed bag in terms of India’s energy outlook. On the one hand, the country appears to be a terrific environment for green energy investment, and has made bold predictions about its plans for cutting emissions. On the other, its power minister, Piyush Goyal, recently said that India shouldn’t feel obliged to stop burning coal and that “it’s America and the western world that has to first stop polluting.”

Now, India is canceling close to 14 gigawatts’ worth of new coal-fired power plants, and warning that existing plants totaling another 8.6 gigawatts in generation capacity could soon be too expensive to keep running. What’s going on?

... if newly inked deals for some huge solar facilities are any indication, solar power is plummeting faster in cost than anyone thought possible. A blog post last week by the Institute for Energy Economics and Financial Analysis said that energy prices agreed to for new solar plants had fallen nearly 50 percent in the last year, to under 4 cents per kilowatt-hour.

As the IEEFA post acknowledges, coal is likely to remain a big part of India’s energy mixture for at least the next two decades. But things are changing fast, and those changes are heavily favoring renewables. In December, India released its 10-year national electricity plan, which included the goal of lowering coal and natural gas to a combined 43 percent of the country’s energy capacity by 2027, with a total of 275 gigawatts of installed renewables in the same time frame.


And here’s another signal of this change in global energy geopolitics.

These "Printed" Solar Panels Were Made for Energy-Starved Regions

These panels are in the final testing phase before being released to the public.
On Monday, a team of researchers in Australia unrolled a field of the first solar panels printed using a standard printer, solar ink, and thin, laminated plastic. And we mean literally unrolled: These solar panels are flexible enough to be rolled into a tube.

The panels were developed by Paul Dastoor and his team at the University of Newcastle, who have also created the solar ink and printing method over the last five years. The demonstration of the rollable solar panels covers an area of about 1,000 square feet, which Dastoor says in the announcement is one of the largest solar tech demonstrations in the world. Although the panels are not yet commercially available, this is one of the final tests before the rollable panels are released. And the $1 per square foot cost makes up for the fact that they’re not as pretty as a Tesla solar roof, particularly for the military.

To make the panels, Dastoor’s team puts a water-based electric ink into a printer cartridge and prints out the panels onto thin sheets of plastic. The solar ink is very sensitive and uses semiconducting molecules within the ink to catch energy from the sun and transfer it through the cell to a battery. Dastoor says he expects a commercial printer to be able to make kilometers worth of printed panels in a single day.


This is a great signal about the possible changes in both distribution paradigm (distributed versus centralized) and production of energy. Power sharing as a new form of building social fabric (pun intended).
Also a very significant signal: the distribution of (electrical) power is accounted for using Blockchain technology.
LO3’s flagship product, the TransActive Grid meter, works like a digital ledger, keeping track of who buys energy and how much they consume. It allows people to purchase electricity directly from their neighbor with the solar array. Consumers use an app on their phone to interact with the meter. The meter uses a blockchain, the technological breakthrough behind Bitcoin, to validate these purchases.
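A "digital ledger validated by a blockchain" for energy trades can be illustrated with a minimal hash chain. This is a hypothetical sketch, not LO3's actual TransActive Grid code: the field names and the buyers and sellers are invented, and a real system adds signatures, consensus, and metering data.

```python
import hashlib
import json

def make_block(prev_hash, buyer, seller, kwh):
    """One ledger entry: who bought how much from whom,
    chained to the previous entry by its hash."""
    record = {"prev": prev_hash, "buyer": buyer,
              "seller": seller, "kwh": kwh}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

def verify(chain):
    """Recompute every hash; a tampered entry breaks the chain."""
    prev = "genesis"
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"]:
            return False
        prev = block["hash"]
    return True

# Two hypothetical neighbor-to-neighbor purchases.
chain = []
prev = "genesis"
for buyer, seller, kwh in [("Ana", "Bob", 3.2), ("Cal", "Bob", 1.5)]:
    block = make_block(prev, buyer, seller, kwh)
    chain.append(block)
    prev = block["hash"]
```

Because each entry embeds the previous entry's hash, quietly editing an old purchase (say, the kilowatt-hours consumed) invalidates every entry after it - which is what makes such a meter's record trustworthy without a central utility keeping the books.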

In Brooklyn, you can now sell solar power to your neighbors

In Brooklyn, you can buy honey collected from an urban bee hive. You can buy lettuce grown atop an old bowling alley.
And now, you can purchase free range, gluten-free, fresh, organic solar power right off your neighbor’s roof.

Brooklyn startup LO3 Energy is revolutionizing the way homeowners buy and sell electricity. They are making it possible to auction rooftop solar power directly to your neighbors, creating a market for homegrown clean energy.

To understand why this is such a big deal, let’s take a look at the way power utilities have historically operated. Traditionally, a centralized utility would sell electricity to numerous homes and businesses. There was one seller and many buyers.

Rooftop solar has disrupted that model. Now, in many parts of the country, you can install solar panels on your roof, generate your own power, and sell the surplus power back to the grid. In this model, both you and the grid buy and sell power.


This is a strong signal - given the rate of implementation of renewable ‘Zero Marginal Cost Energy’ - providing strong arguments for why we are in a phase transition in global geopolitics.

India will sell only electric cars within the next 13 years

Every car sold in India from 2030 will be electric, under new government plans that have delighted environmentalists and dismayed the oil industry.
It’s hoped that by ridding India’s roads of petrol and diesel cars in the years ahead, the country will be able to reduce the harmful levels of air pollution that contribute to a staggering 1.2 million deaths per year.

India’s booming economy has seen it become the world’s third-largest oil importer, shelling out $150 billion annually for the resource – so a switch to electric-powered vehicles would put a sizable dent in demand for oil. It’s been calculated that the revolutionary move would save the country $60 billion in energy costs by 2030, while also reducing running costs for millions of Indian car owners.

... it’s been calculated that the gradual switch to electric vehicles across India would decrease carbon emissions by 37% by 2030.


We are entering a new world of prosthetics - physical and mental - connecting more biologically and interfacing with the brain more directly.

Researchers Connect First Click On Arm Prosthesis to Nerves

...the first patient in the Netherlands received his click-on robotic arm. By means of a new technique, this robotic arm is clicked directly onto the bone. A unique characteristic of this prosthesis is that it can be controlled by the patient’s own thoughts. Worldwide, there are only a handful of patients with such a prosthesis.

In April 2010, Johan Baggerman lost his arm in a serious accident. Seven years later, he is one of the first patients in the world with a click-on robotic arm. With a click-on robotic arm, the prosthesis is connected directly to the arm stump. Through an opening in the skin, the patient “clicks” the prosthesis onto a metal rod in the bone. Because the prosthesis connects directly to the skeleton, a prosthesis socket is no longer necessary. This ensures that it does not slip off, avoids skin problems, and makes it very easy to put on and take off. This method has long been applied to leg prostheses and is now being applied for the first time in the Netherlands to the arm. The main difference from the click-on leg prostheses is that the new arm prosthesis can communicate with the patient’s nerves, allowing the patient to control the prosthesis with their mind.


Here’s another signal of accelerating progress in the merger of AI and robotics.

Meet the Most Nimble-Fingered Robot Yet

A dexterous multi-fingered robot practiced using virtual objects in a simulated world, showing how machine learning and the cloud could revolutionize manual work.
Inside a brightly decorated lab at the University of California, Berkeley, an ordinary-looking robot has developed an exceptional knack for picking up awkward and unusual objects. What’s stunning, though, is that the robot got so good at grasping by working with virtual objects.

The robot learned what kind of grip should work for different items by studying a vast data set of 3-D shapes and suitable grasps. The UC Berkeley researchers fed images to a large deep-learning neural network connected to an off-the-shelf 3-D sensor and a standard robot arm. When a new object is placed in front of it, the robot’s deep-learning system quickly figures out what grasp the arm should use.

The bot is significantly better than anything developed previously. In tests, when it was more than 50 percent confident it could grasp an object, it succeeded in lifting and shaking the item without dropping it 98 percent of the time. When the robot was unsure, it would poke the object to figure out a better grasp; after doing that, it lifted the object successfully 99 percent of the time. This is a significant step up from previous methods, the researchers say.


While this is a weak signal, it points to the possibility of a major phase transition in how we understand ourselves and the world around us - an extension of mind, body and senses.
“If you look at what’s happening with sensors, you’ll see that many different disciplines have to come together. Ubiquitous sensing has so many aspects — chemical, biological, physical, radiological,” he says. “With all this sensing research going on, we need a place to coordinate our synergies.”

The future of sensory technology

MIT.nano hosts its first major research symposium.
We are entering the age of ubiquitous sensing. Smart sensors will soon track our health and wellness, enable autonomous cars, and monitor machines, buildings, and bridges. Massive networks of small, inexpensive sensors will enable large-scale global data collection — impacting the distribution of agriculture and water, environmental monitoring, disaster recovery, disease-outbreak detection and intervention, and the operation of cities. With this change in mind, MIT is creating a singular hub to unite experts as they develop a new generation of sensors, and sensing and measurement technologies.

On May 25-26, SENSE.nano will debut, marking the first “center of excellence” powered by MIT.nano, the 214,000-square-foot research facility taking shape in the heart of the MIT campus. The center will empower people in the MIT community, engage industry leaders, and educate the public.


Almost a decade after the memristor was first discovered - it looks like they have finally created a chip and system that can use its power. This is another signal of a changing computational paradigm and another item to file under “Moore’s Law is Dead - Long Live Moore’s Law”.
"The tasks we ask of today's computers have grown in complexity," Lu said. "In this 'big data' era, computers require costly, constant and slow communications between their processor and memory to retrieve large amounts of data. This makes them large, expensive and power-hungry."

Bioinspired memristor chips that see patterns over pixels

Inspired by how mammals see, a new "memristor" computer circuit prototype at the University of Michigan has the potential to process complex data, such as images and video, orders of magnitude faster and with much less power than today's most advanced systems.

Faster image processing could have big implications for autonomous systems such as self-driving cars, says Wei Lu, U-M professor of electrical engineering and computer science. Lu is lead author of a paper on the work published in the current issue of Nature Nanotechnology ("Sparse coding with memristor networks").

Lu's next-generation computer components use pattern recognition to shortcut the energy-intensive process conventional systems use to dissect images. In this new work, he and his colleagues demonstrate an algorithm that relies on a technique called "sparse coding" to coax their 32-by-32 array of memristors to efficiently analyze and recreate several photos.

Memristors are electrical resistors with memory -- advanced electronic devices that regulate current based on the history of the voltages applied to them. They can store and process data simultaneously, which makes them a lot more efficient than traditional systems. In a conventional computer, logic and memory functions are located at different parts of the circuit.
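The "sparse coding" technique mentioned above can be sketched in ordinary software. The toy example below is my own illustration, not the U-M team's algorithm or data: it uses iterative soft-thresholding to represent a signal with as few dictionary "atoms" as possible, which is the principle their memristor crossbar carries out in analog hardware.

```python
# Toy sparse coding via iterative soft-thresholding (ISTA).
# Hypothetical data and dictionary, chosen only to illustrate the idea.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def soft_threshold(x, t):
    # Shrink each coefficient toward zero; small ones become exactly zero.
    return [max(abs(xi) - t, 0.0) * (1 if xi >= 0 else -1) for xi in x]

# Dictionary D: columns are "feature" atoms (unit vectors, for simplicity).
D = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]

signal = [0.9, 0.0, 0.0, 0.5]   # built from just two atoms -> sparse code

code = [0.0] * 4
step, sparsity = 0.5, 0.05
for _ in range(100):
    residual = [s - r for s, r in zip(signal, matvec(D, code))]
    grad = matvec(transpose(D), residual)
    code = soft_threshold([c + step * g for c, g in zip(code, grad)],
                          step * sparsity)

print([round(c, 2) for c in code])   # unused atoms driven to exactly zero
```

Only the atoms actually present in the signal keep non-zero coefficients; in the U-M chip the matrix-vector products above are performed physically by the memristor array, which is where the speed and power savings come from.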


For Fun … or Not?
In a world of increasing surveillance - the Philip K. Dick story (and film starring Keanu Reeves) “A Scanner Darkly” anticipated this website’s effort. Even if you’re not interested - the images are worth a look simply to see how current face-recognition software can be ‘fooled’.

Camouflage from face detection.

CV Dazzle explores how fashion can be used as camouflage from face-detection technology, the first step in automated face recognition.
The name is derived from a type of World War I naval camouflage called Dazzle, which used cubist-inspired designs to break apart the visual continuity of a battleship and conceal its orientation and size. Likewise, CV Dazzle uses avant-garde hairstyling and makeup designs to break apart the continuity of a face. Since facial-recognition algorithms rely on the identification and spatial relationship of key facial features, like symmetry and tonal contours, one can block detection by creating an “anti-face”.
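The “anti-face” idea can be illustrated with a toy detector. The sketch below is purely hypothetical - a made-up scorer that checks left-right tonal symmetry, one of the cues real detectors exploit - and shows how an asymmetric “dazzle” pattern pushes the score below a detection threshold. Real systems such as Viola-Jones cascades are far more sophisticated.

```python
# Toy illustration of the CV Dazzle principle: a fictional "detector"
# that scores left-right tonal symmetry of a tiny grayscale image.

def symmetry_score(image):
    """Fraction of pixels that roughly match their mirrored twin."""
    matches = total = 0
    for row in image:
        for x in range(len(row) // 2):
            total += 1
            if abs(row[x] - row[-1 - x]) < 0.2:
                matches += 1
    return matches / total

plain_face = [
    [0.2, 0.8, 0.8, 0.2],   # symmetric tonal pattern: "eyes"
    [0.5, 0.5, 0.5, 0.5],
    [0.3, 0.9, 0.9, 0.3],   # "mouth"
]

dazzle_face = [
    [0.2, 0.1, 0.8, 0.2],   # dark asymmetric patch over one "eye"
    [0.9, 0.5, 0.5, 0.5],   # contrasting streak on one cheek
    [0.3, 0.9, 0.1, 0.3],
]

THRESHOLD = 0.8
print(symmetry_score(plain_face) >= THRESHOLD)    # -> True  (detected)
print(symmetry_score(dazzle_face) >= THRESHOLD)   # -> False (camouflaged)
```

The same face-shaped region fails the symmetry test once the pattern is broken - which is exactly what the avant-garde hairstyling and makeup aim to do against real feature-based detectors.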


Thursday, May 25, 2017

Friday Thinking 26 May 2017



Contents
Quotes:


Articles:
How quantum superposition could unravel the ‘grandfather paradox’


Cities are machines, the largest things we build. Their airports and seaports digest and expel people and goods, while their roads and rails siphon both through the urban landscape. Their tunnels carry data, power, water, and sewage. Their governing authorities work (one hopes) with deliberateness, imposing coherence on what otherwise could be chaos. It can all hum efficiently—or fail spectacularly. Typically, all of this is constructed over centuries. The Parisian sewer system dates to the 1850s; New York’s first subway line opened in 1904; London got its first central power station in 1891. Avenues follow cow paths; creeks become water tunnels; fiber-optic lines slowly take their place beside electric cables. The lesson of city building is that infrastructure takes forever—the tortoise to technology’s hare.
But Dubai has done it differently. Dubai has built in 50 years what has taken most cities 100.


For the next generation, Dubai’s advantages are more fraught, tied as they are to impending climate catastrophe. Many cities are about to face new extremes of temperature and drought. Dubai already does. Many cities will struggle to find fresh water and clean power. Dubai already does. Viewed in this light, Dubai is a place where the future has arrived early.


Rather than be intimidated by its potentially catastrophic challenges, withdrawing from the world and doubling down on outdated technologies, Dubai is accelerating toward them. The plan is simple: Turn the traditional mechanisms of urban life into a platform for confronting the hazards of contemporary society. Then export those innovations. If a city is a machine, Dubai wants to be the most advanced city-machine the world has ever seen—and it wants to sell its blueprints to everyone. “Dubai is recognizing that climate change is an existential threat to its ability to be a prosperous part of the world,” says David Pomerantz, executive director of the Energy and Policy Institute, a watchdog group.


In this imagined Dubai of the future, the electricity and water authority has blown past today’s supersize desalination plant and opened a bio-desalination plant, grown from the genes of a jellyfish (the “most absorptive natural material”) and a mangrove tree (“one of nature’s best desalinators”). And it sold them too: “We also export jellyfish bio-desalination plants to cities across the world,” the stentorian voice continues. Robots construct buildings from sand. An artificial intelligence selects and grows food in indoor farms. And flying cars pulse through traffic-free streets. It’s all presented with enough science-fiction flair to maintain a sense of humor. But the punchline is serious: “We solved our own problems, and now climate solutions are our greatest export.” At a historical moment when—in the United States, at least—global-warming predictions remain politically controversial, it is startling to see Dubai planning its economic future around these challenges.


“Because we don’t have, we need to think harder,” Al Gergawi says, tacitly acknowledging that the pieces of the puzzle don’t yet fit together. “We need to think faster, and we need to reinvent every single product.”

Oil won't last forever, so Dubai is betting big on science and tech



We are on the cusp of one of the fastest, deepest, most consequential disruptions of transportation in history. By 2030, within 10 years of regulatory approval of autonomous vehicles (AVs), 95% of U.S. passenger miles traveled will be served by on-demand autonomous electric vehicles owned by fleets, not individuals, in a new business model we call “transport-as-a-service” (TaaS). The TaaS disruption will have enormous implications across the transportation and oil industries, decimating entire portions of their value chains, causing oil demand and prices to plummet, and destroying trillions of dollars in investor value — but also creating trillions of dollars in new business opportunities, consumer surplus and GDP growth.


The disruption will be driven by economics. Using TaaS, the average American family will save more than $5,600 per year in transportation costs, equivalent to a wage raise of 10%. This will keep an additional $1 trillion per year in Americans’ pockets by 2030, potentially generating the largest infusion of consumer spending in history.

Rethinking Transportation 2020-2030

Disruption of Transportation & Collapse of the Internal-Combustion Vehicle and Oil Industries
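A quick back-of-the-envelope check on the $5,600-per-family and $1 trillion figures above. The household count below is my own assumption (a Census-order figure), not a number from the report:

```python
# Rough sanity check of the RethinkX savings claim.
# ~126M U.S. households is an outside assumption, not from the report.

households = 126_000_000
savings_per_family = 5_600          # dollars per year, from the report

aggregate = households * savings_per_family
print(f"${aggregate / 1e12:.2f} trillion per year")   # -> $0.71 trillion per year
```

Direct household savings alone land around $0.7 trillion - the same order of magnitude as the report’s $1 trillion claim, which presumably also counts effects beyond per-family transport savings.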


For the last five years, Apple held on to the title of the world’s most valuable brand. Then this year, the iPhone maker lost the top spot to Google, according to consultancy Brand Finance’s Global 500 rankings.


As Apple’s brand value tumbled 27% to $107.1 billion in 2016, Google’s increased to $109.5 billion. Amazon, with 53% brand value growth, was close behind at $106.4 billion.

The world’s most valuable brands in 2017



But the hard problem is getting databases working together, invisibly, for our benefit, or getting the databases to interact smoothly with processes running on our own laptops.


Those technical problems are usually masked by bureaucracy, but we experience their impact every single day of our lives. It’s the devil’s own job getting two large organizations working together on your behalf, and deep down, that’s a software issue. Perhaps you want your car insurance company to get access to a police report about your car getting broken into. In all probability you will have to get the data out of one database in the form of a handful of printouts, and then mail them to the company yourself: there’s no real connectivity in the systems. You can’t drive the process from your laptop, except by the dumb process of filling in forms. There’s no sense of using real computers to do things, only computers abused as expensive paper simulators. Although in theory information could just flow from one database to another with your permission, in practice the technical costs of connecting databases are huge, and your computer doesn’t store your data so it can do all this work for you. Instead it’s just something you fill in forms on. Why are we under-utilizing all this potential so badly?


The human factors — the mindsets which generate the software — don’t fit together. Each enterprise builds their computer system in their own image, and these images disagree about what is vital and what is incidental, and truth does not flow between them easily.


Over and over again, we go back to paper and metaphors from the age of paper because we cannot get the software right, and the core to that problem is that we managed to network the computers in the 1990s, but we never did figure out how to really network the databases and get them all working together.


Imagine how much Wikipedia would suck by now if it were a startup pushing hard to monetize its user base and make its investors their money back.

Vinay Gupta - Programmable blockchains in context: Ethereum’s future



This is a longish read - an article by a physicist and psychiatrist about time and the mind - that got my curiosity. It is part of a growing cloud of weak signals about fundamentally new understandings of reality arising from developments in a wide range of sciences. However, in this interesting account there may be a flaw concerning the pre-determinability of the state space and how entities interact with their own environments - such that actions can change the conditions of the next action. Also, the assumptions of probability become more tenuous when systems can enact unpredictable, unknowable ‘adjacent possibles’. A key challenge is reconciling knowledge gained from systems contained in the lab with realities that have unknowable boundaries.

The mathematics of mind-time

The special trick of consciousness is being able to project action and time into a range of possible futures
I have a confession. As a physicist and psychiatrist, I find it difficult to engage with conversations about consciousness. My biggest gripe is that the philosophers and cognitive scientists who tend to pose the questions often assume that the mind is a thing, whose existence can be identified by the attributes it has or the purposes it fulfils.


But in physics, it’s dangerous to assume that things ‘exist’ in any conventional sense. Instead, the deeper question is: what sorts of processes give rise to the notion (or illusion) that something exists? For example, Isaac Newton explained the physical world in terms of massive bodies that respond to forces. However, with the advent of quantum physics, the real question turned out to be the very nature and meaning of the measurements upon which the notions of mass and force depend – a question that’s still debated today.


As a consequence, I’m compelled to treat consciousness as a process to be understood, not as a thing to be defined. Simply put, my argument is that consciousness is nothing more and nothing less than a natural process such as evolution or the weather. My favourite trick to illustrate the notion of consciousness as a process is to replace the word ‘consciousness’ with ‘evolution’ – and see if the question still makes sense. For example, the question What is consciousness for? becomes What is evolution for? Scientifically speaking, of course, we know that evolution is not for anything. It doesn’t perform a function or have reasons for doing what it does – it’s an unfolding process that can be understood only on its own terms. Since we are all the product of …


This is one more signal of an accelerating phase transition in global energy geopolitics.

Gujarat Cancelling 4 Gigawatt Coal Power Plant As India Moves Away From Coal

The government of the Indian state of Gujarat has cancelled a proposed 4 gigawatt ultra-mega coal power project, citing existing surplus generation capacity and a desire to transition from fossil fuel–based energy sources to renewable power.


India’s Business Standard reported earlier this month that the government of Gujarat, under Chief Minister Vijay Rupani, has cancelled a proposal for a new 4,000 megawatt (MW) ultra-mega coal power project that was to be developed by the Gujarat State Electricity Corporation. Specifically, the reasoning given for cancelling the project was the state’s already substantial installed capacity — around 30,000 MW — of conventional and renewable energy, with the government adding that building a new conventional coal power plant simply did not make sense.


The move is well in line with efforts across India to decrease its reliance upon coal, and further gives the lie to claims from Australian politicians that India is in desperate need of more coal.


Only a few weeks ago it was reported that India had installed more renewable energy capacity over the last financial year than it did thermal power capacity, an impressive achievement for a country which is technically an emerging economy, and one with a massive population.


India is primarily focusing on installing massive amounts of solar power, and a report from November last year outlined how India is planning to build 1 terawatt of solar capacity — which sounds absurd, but given how much solar India has already installed, may not be as far-fetched as it first appears. Further, India-based consultancy Mercom Capital predicts that 10 GW of new solar capacity will be installed in India in 2017 alone.


Further, the number of coal plants planned in India is also declining. In August of last year, a report published by the Institute for Energy Economics & Financial Analysis (IEEFA) showed that the country was intending to move forward on developing several coal-fired ultra-mega power plants, despite the fact that it was unlikely that India actually needed any more capacity.


Fast forward to March of this year, and a new report showed that the total number of coal plants under development globally plummeted in 2016, with at least 68 GW of coal construction frozen at over 100 project sites in China and India alone. It appears that a Greenpeace report projecting that 94% of India’s planned coal capacity would be lying idle in 2022 might have got through to some of India’s leaders.

The phase transition in energy geopolitics seems to accelerate every day.

Mersey feat: world's biggest wind turbines go online near Liverpool

UK cements its position as global leader in wind technology as increasing scale drives down costs
The planet’s biggest and most powerful wind turbines have begun generating electricity off the Liverpool coast, cementing Britain’s reputation as a world leader in the technology.


Danish company Dong Energy has just finished installing 32 turbines in Liverpool Bay that are taller than the Gherkin skyscraper, with blades longer than nine London buses. Dong Energy, the windfarm’s developer, believes these machines herald the future for offshore wind power: bigger, better and, most importantly, cheaper.


Each of the 195m-tall turbines in the Burbo Bank extension has more than twice the power capacity of those in the neighbouring Burbo Bank windfarm completed a decade ago. “That shows you something about the scale-up of the industry, the scale-up of the technology,” said Benjamin Sykes, the country manager for Dong Energy UK.


Collectively, the UK’s offshore windfarms now have a capacity of 5.3GW, generating enough electricity to power 4.3m homes. Eight further projects already under construction will add more than half that capacity again.
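Those two figures are mutually consistent under standard assumptions that the article doesn’t state - a typical offshore capacity factor near 35% and average UK household consumption around 3,800 kWh per year (both assumed here, not sourced from the piece):

```python
# Sanity check: does 5.3 GW of offshore wind plausibly power 4.3m homes?
# Capacity factor and per-home consumption are outside assumptions.

capacity_gw = 5.3
capacity_factor = 0.35               # typical offshore wind, assumed
home_kwh_per_year = 3_800            # approximate UK average, assumed

annual_twh = capacity_gw * 8760 * capacity_factor / 1000   # GWh -> TWh
homes_millions = annual_twh * 1e9 / home_kwh_per_year / 1e6

print(f"{homes_millions:.1f} million homes")   # -> 4.3 million homes
```

The “homes powered” convention in wind-industry press releases is exactly this kind of annual-energy calculation, not a claim about continuous supply.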

It is very possible that we are at the beginning of a Cyber Global War I - one that is much more truly global, in that the variety of participants goes far beyond nation states to include a full range of interest groups, criminal organizations and individuals.
Here’s just one recent ‘engagement’ in the war. Of course, the recent massive ‘ransomware’ attacks are another signal of engagements to come.

#MacronLeaks changed political campaigning: Why Macron succeeded and Clinton failed

Last week’s massive hack of the Macron campaign and the sharing of alleged documents using #MacronLeaks on social media gave supporters the chills. Right-wing activists and autonomous bots swarmed Facebook and Twitter with leaked information that was mixed with falsified reports, to build a narrative that Macron was a fraud and hypocrite.


My colleagues at the Oxford Internet Institute and I have conducted an in-depth analysis of the impact of #MacronLeaks. Our research shows that 50 percent of the Twitter content was generated by just three percent of accounts with an average of 1,500 unique tweets per hour and 9,500 retweets of these tweets per hour. We estimate that over 22.8 million Twitter users were exposed to this information every hour on election day.

This is a fascinating study that signals the link between brain functioning and particular languages. What other aspects of cognitive capacity - and reality - are affected by particular languages? If you haven’t seen the movie ‘Arrival’ - it’s worth the view.
"In this sense, we may regard dyslexia in Chinese and English as two different brain disorders," Dr. Tan said, "because completely different brain regions are disrupted. It's very likely that a person who is dyslexic in Chinese would not be dyslexic in English."
The new research suggests .... The schooling required to read English or Chinese may fine-tune neural circuits in distinctive ways.
In ways that ancient scribes never imagined, text has transformed us. Every brain shaped by reading, whether it is schooled in Chinese or English text, measurably differs -- in terms of patterns of energy use and brain structure -- from one that has never mastered the written word, comparative brain-imaging studies show. "There are real differences that emerge because of literacy," Dr. Wolf said.  

How the Brain Learns to Read Can Depend on the Language

For generations, scholars have debated whether language constrains the ways we think. Now, neuroscientists studying reading disorders have begun to wonder whether the actual character of the text itself may shape the brain.


Studies of schoolchildren who read in varying alphabets and characters suggest that those who are dyslexic in one language, say Chinese or English, may not be in another, such as Italian.


Dyslexia, in which the mind scrambles letters or stumbles over text, is twice as prevalent in the U.S., where it affects about 10 million children, as in Italy, where the written word more closely corresponds to its spoken sound. "Dyslexia exists only because we invented reading," said Tufts University cognitive neuroscientist Maryanne Wolf, author of Proust and the Squid: The Story and Science of the Reading Brain.


Among children raised to read and write Chinese, the demands of reading draw on parts of the brain untouched by the English alphabet, new neuroimaging studies reveal. It's the same with dyslexia, psychologist Li Hai Tan at the University of Hong Kong and his colleagues reported last month in the Proceedings of the National Academy of Sciences. The problems occur in areas not involved in reading other alphabets.


Some social psychologists speculate that the brain changes caused by literacy could be involved in cultural differences in memory, attention and visual perception. In January's Psychological Science, MIT researchers reported that European-Americans and students from several East Asian cultures, for example, showed different patterns of brain activation when making snap judgments about visual patterns.


No one knows which came first: habits of thought or the writing system that gave them tangible form. A writing system could be drawn from the archaeology of the mind, perpetuating aspects of mental life conceived at the dawn of civilization.

And it seems the therapeutic use of language in clinical settings can also have significant impact on brain structures.

Study reveals for first time that talking therapy changes the brain's wiring

A new study from King's College London and South London and Maudsley NHS Foundation Trust has shown for the first time that cognitive behaviour therapy (CBT) strengthens specific connections in the brains of people with psychosis, and that these stronger connections are associated with long-term reduction in symptoms and recovery eight years later.


CBT - a specific type of talking therapy - involves people changing the way they think about and respond to their thoughts and experiences. For individuals experiencing psychotic symptoms, common in schizophrenia and a number of other psychiatric disorders, the therapy involves learning to think differently about unusual experiences, such as distressing beliefs that others are out to get them. CBT also involves developing strategies to reduce distress and improve wellbeing.


The findings, published in the journal Translational Psychiatry, follow the same researchers' previous work which showed that people with psychosis who received CBT displayed strengthened connections between key regions of the brain involved in processing social threat accurately.


The new results show for the first time that these changes continue to have an impact years later on people's long-term recovery.

Here is a very significant signal about the profound power of framing - not just for structuring reasoning, but for enabling a cognitive capacity.

Framing spatial tasks as social eliminates gender differences

Women underperform on spatial tests when they don't expect to do as well as men, but framing the tests as social tasks eliminates the gender gap in performance, according to new findings published in Psychological Science, a journal of the Association for Psychological Science. The results show that women performed just as well as their male peers when the spatial tests included human-like figures.


"Our research suggests that we may be underestimating the abilities of women in how we measure spatial thinking," says postdoctoral researcher Margaret Tarampi of the University of California, Santa Barbara. "Given findings that entry into and retention in STEM disciplines is affected by our measures of spatial ability, we may be disproportionately limiting the accessibility of these fields to women because of the ways that we are measuring spatial abilities."


Previous work on spatial thinking has provided some evidence that men are, on average, better than women at certain spatial tasks, such as imagining what an object would look like if it were rotated a specific way. But Tarampi and colleagues Nahal Heydari, a former UCSB undergraduate student, and Mary Hegarty, professor of psychological and brain sciences at UCSB, noticed that little research had investigated whether gender differences exist when it comes to spatial perspective-taking. The researchers were intrigued because being able to imagine objects and environments from another perspective is an ability that we use every day, in tasks such as reading maps, giving directions, and playing video games.


Although the existing gender stereotype about spatial ability suggested that men might be better at spatial perspective-taking than women, Tarampi and colleagues noted that the skill could also be thought of as a test of social ability or empathy, which women are typically thought to be better at.

The re-imagining of everything has to seriously include our urban environment - we need to re-architect this environment in a way that doesn’t depend on our current concept of the car as a personal transportation environment.

As self-driving cars hit the road, real estate development may take new direction

Planners are anxious about automated vehicles and their potential to reshape development patterns and the urban landscape
The futuristic vision offered by automated vehicles—the freedom to be active during your commute instead of wasting away behind the wheel while stuck in traffic—isn’t quite as utopian a scenario when you run it past cautious and concerned city planners.


Ask Don Elliott, a zoning consultant and director at Clarion Associates in Denver, and he’ll tell you the idea of empty cars congesting city streets and mobile offices zipping around main roads can become downright dystopian.


“I’ve seen the blood run out of people’s faces,” he says when talking about the impact of automated vehicles on transportation, land use, and real estate. “For years, planners have been fighting for a 1 or 2 percent change in transportation mode [getting more people to use transit or bike instead of drive]. With this technology, everything goes out the window. It’s a nightmare.”


The much-hyped transition to autonomous cars, while still years, or even decades, away, according to experts, is an opportunity and challenge that has wide potential to reshape our transportation systems.


But many believe that as city planners, transportation officials, and, eventually, developers start grappling with the changes to come, autonomous vehicles’ potential to reshape real estate, development, and city planning will rival that of the introduction of the automobile. At the American Planning Association’s annual conference earlier this month in New York City, the issue of autonomous vehicles and driverless cars, one admittedly far in the future, was the subject of numerous present-day panels, discussions, and debates.

Here’s a great signal about possible ways to broaden the competition for the delivery of Internet services. Despite the headline being somewhat ‘grammar free’. :)

Google owner Alphabet balloons connect flood-hit Peru

“Tens of thousands” of Peruvians have been getting online using Project Loon, the ambitious connectivity project from Google's parent company, Alphabet.
Project Loon uses tennis court-sized balloons carrying a small box of equipment to beam internet access to a wide area below.


The team told the BBC they had been testing the system in Peru when serious floods hit in January, and so the technology was opened up to people living in three badly-hit cities.


Until now, only small-scale tests of the technology had taken place.
Project Loon is in competition with other attempts to provide internet from the skies, including Facebook’s Aquila project which is being worked on in the UK.

Here’s another signal of continued developments in the computational world - file under ‘Moore’s Law is Dead - Long Live Moore’s Law’. HP demonstrated the memristor almost a decade ago - it is very possible that they have integrated that technology into their ‘memory-driven computer’.

HPE Unveils Computer Built for the Era of Big Data

Prototype from The Machine research project upends 60 years of innovation and demonstrates the potential for Memory-Driven Computing
Hewlett Packard Enterprise (NYSE: HPE) today introduced the world’s largest single-memory computer, the latest milestone in The Machine research project (The Machine). The Machine, which is the largest R&D program in the history of the company, is aimed at delivering a new paradigm called Memory-Driven Computing—an architecture custom-built for the big data era.


The prototype unveiled today contains 160 terabytes (TB) of memory, capable of simultaneously working with the data held in every book in the Library of Congress five times over—or approximately 160 million books. It has never been possible to hold and manipulate whole data sets of this size in a single-memory system, and this is just a glimpse of the immense potential of Memory-Driven Computing.


Based on the current prototype, HPE expects the architecture could easily scale to an exabyte-scale single-memory system and, beyond that, to a nearly-limitless pool of memory—4,096 yottabytes. For context, that is 250,000 times the entire digital universe today.


With that amount of memory, it will be possible to simultaneously work with every digital health record of every person on earth; every piece of data from Facebook; every trip of Google’s autonomous vehicles and every data set from space exploration all at the same time—getting to answers and uncovering new opportunities at unprecedented speeds.
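The “250,000 times the entire digital universe” comparison a few lines up checks out with simple unit arithmetic, if one assumes the widely quoted 2017-era estimate of about 16 zettabytes for the world’s data (my assumption - the press release doesn’t give its baseline):

```python
# Unit check on HPE's scaling claim. The 16 ZB "digital universe"
# figure is an outside assumption (a commonly cited 2017-era estimate).

YB = 10**24            # yottabyte, in bytes
ZB = 10**21            # zettabyte, in bytes

pool = 4_096 * YB                    # HPE's projected memory pool
digital_universe = 16 * ZB           # assumed size of all data today

print(pool // digital_universe)      # -> 256000
```

256,000 rounds to the press release’s “250,000 times”, so the claim is internally consistent with that baseline.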

This may be old news by now - but it is definitely another signal related to the domestication of DNA and a whole host of other implications.
“Our hope is that one day this ovarian bioprosthesis is really the ovary of the future,” said Teresa Woodruff at Northwestern University in Chicago. “The goal of the project is to be able to restore fertility and endocrine health to young cancer patients who have been sterilised by their cancer treatment.”

3D-printed ovaries allow infertile mice to give birth

The creation of artificial ovaries for humans is a step closer after birth of healthy pups from mice given ‘ovarian bioprosthesis’
Infertile mice have given birth to healthy pups after having their fertility restored with ovary implants made with a 3D printer.


Researchers created the synthetic ovaries by printing porous scaffolds from a gelatin ink and filling them with follicles, the tiny, fluid-holding sacs that contain immature egg cells.


In tests on mice that had one ovary surgically removed, scientists found that the implants hooked up to the blood supply within a week and went on to release eggs naturally through the pores built into the gelatin structures.


The work marks a step towards making artificial ovaries for young women whose reproductive systems have been damaged by cancer treatments, leaving them infertile or with hormone imbalances that require them to take regular hormone-boosting drugs.

We are barely at the threshold of the era of domesticated DNA - this is an excellent signal of the trajectory of where we are going. Whether you agree or not, the 21st century is a whole new world.

Now That We Can Read Genomes, Can We Write Them?

A group of scientists is pushing ahead with plans to build whole genomes—including human ones—from scratch.
Since the Human Genome Project (HGP) was completed in 2003, scientists have sequenced the full genomes of hundreds, perhaps thousands, of species. Octopuses. Barley. Mosquitoes. Birch trees. Reading genomes is now commonplace, but that’s not enough for the group of scientists who gathered at the New York Genome Center on Tuesday. They want to write entire genomes with the same ease, synthesizing them from scratch and implanting them into hollow cells.


One team already did this for a tiny bacterium in 2010, creating a synthetic cell called Synthia. But the New York group has set its sights on building the considerably larger genomes of plants, animals, and yes—after a lot of further discussion—humans.


For now, that’s technically implausible. You’d have to make millions of short stretches of DNA, assemble them into larger structures, get them into an empty cell, and wrap and fold them correctly. In the process, you’d go bankrupt. Although we can sequence a human genome for less than $1,000, writing all 3 billion letters would still cost around $30 million. Still, even that exorbitant price has fallen from $12 billion in 2003, and should reach $100,000 within the next 20 years. And the group assembled in New York wants to double that pace.


They’re pushing for an international project called Genome Project-write—GP-write—that aims to reduce the costs of building large genomes by 1,000 times within 10 years. “It’s an aggressive goal, but based on what we saw with the HGP—the reading project, if you will—we think we can do this,” said Jef Boeke from New York University School of Medicine. And just as the HGP helped to drive down the cost of DNA-sequencing, the GP-write team hopes that the demand created by their initiative will push down the cost of DNA-writing tech. “I want to see a time in the not-too-distant future when, in elementary schools, it’ll be routine to think: I want to do some DNA synthesis as a project,” said Pamela Silver from Harvard Medical School.
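The cost figures in the excerpt reduce to simple per-base arithmetic; everything below except the computed target comes straight from the article:

```python
# Per-base economics of genome writing, using figures from the article.

bases = 3_000_000_000            # letters in a human genome
cost_today = 30e6                # ~$30M to synthesize one genome today

per_base = cost_today / bases
gp_write_target = cost_today / 1_000   # GP-write goal: 1,000x cheaper in 10 years

print(f"${per_base:.2f} per base today")   # -> $0.01 per base today
print(f"${gp_write_target:,.0f} per written genome at the GP-write target")
```

A roughly $30,000 written genome would still cost well above the sub-$1,000 price of reading one, but it is a far cry from the $12 billion of 2003.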

There are other consequences to the ability to ‘read’ a genome - one of which is a new understanding of what a species is, and of the more fluid nature of genes within the gene pool.

What Does it Mean to Be a Species? Genetics is Changing the Answer

As DNA techniques let us see animals in finer and finer gradients, the old definition is falling apart
For Charles Darwin, "species" was an undefinable term, "one arbitrarily given for the sake of convenience to a set of individuals closely resembling each other." That hasn't stopped scientists in the 150 years since then from trying, however. When scientists today sit down to study a new form of life, they apply any number of more than 70 definitions of what constitutes a species—and each helps get at a different aspect of what makes organisms distinct.


In a way, this plethora of definitions helps prove Darwin’s point: The idea of a species is ultimately a human construct. With advancing DNA technology, scientists are now able to draw finer and finer lines between what they consider species by looking at the genetic code that defines them. How scientists choose to draw that line depends on whether their subject is an animal or plant; the tools available; and the scientist’s own preference and expertise.


Now, as new species are discovered and old ones thrown out, researchers want to know: How do we define a species today? Let’s look back at the evolution of the concept and how far it’s come.

These advances have also renewed debates about what it means to be a species, as ecologists and conservationists discover that many species that once appeared singular are actually multitudes. Smithsonian entomologist John Burns has used DNA technology to distinguish a number of so-called "cryptic species"—organisms that appear physically identical to members of a certain species, but have significantly different genomes. In a 2004 study, he was able to determine that a species of tropical butterfly identified in 1775 actually encompassed 10 separate species.

For Fun
For anyone who has thought about the paradox of time travel and inadvertently enacting your own demise - this is a perfect 3-minute video.

How quantum superposition could unravel the ‘grandfather paradox’

The ‘grandfather paradox’ has long been one of the most popular thought experiments in physics: you travel back in time and murder your grandfather before your father is ever born. If you’ve killed your grandfather, you’ve prevented your own existence, but if you never existed, how could you have committed the murder in the first place? Some physicists have avoided the question by arguing that backwards time travel simply isn’t consistent with the laws of physics, or by asserting a ‘many worlds’ interpretation of the Universe. But could the concept of quantum superposition remove what seems so paradoxical from this tale of time travel and murder once and for all?