Thursday, March 12, 2015

Friday Thinking, 13 March 2015

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this.

Fred Wilson, a venture capitalist with Union Square Ventures in New York City, has started engaging in online conversations with his peers about community ownership of tech companies. “With more and more web and mobile applications deriving their value mostly or completely from their user base (Facebook, Twitter, eBay, Etsy, Reddit, Kickstarter, Uber, etc), there is a growing sense that the community could or should have some real ownership in these businesses,” he wrote in January.

Some policymakers are also taking note. A $1.2 million investment in workers’ co-ops by the New York City Council announced last year was quickly bettered by a $5 million investment by the mayor of Madison in Wisconsin. “With a co-operative you don’t have to worry about a buyout,” said Mayor Paul Soglin. “You don’t have to worry about a CEO one day picking up and moving the company to Fargo. With a co-operative you can have confidence that the company and the wealth it generates are going to stay local.”

“The main perk for me is, ‘a problem shared is a problem halved,’....

We’re still missing an important component. Science has always been about sharing, about the flow of ideas. For the first few centuries of scientific research, publishing meant paper journals, and those are (by nature) scarce commodities. You can’t publish a terabyte of data in a journal, nor can you publish a long, detailed, and extremely precise description of an experiment. You can’t publish the software you used to analyze the data. When you’re limited to paper, about all that makes sense is to publish a rough description of what you did, some graphs of the data, and the result. As our experiments and analyses get more complex, that’s no longer enough. In addition to collecting much more data and describing the experiments as detailed programs, we need ways to share the data, the experimental processes, and the tools to analyze that data. That sharing goes well beyond what traditional scientific journals provide, though some publications (notably F1000Research and GigaScience) are taking steps in this direction.

To understand what we need to share, we need to look at why we’re sharing. We’re not sharing just because sharing is a good in itself, and it’s what our nursery school teachers encouraged us to do. Sharing is central to the scientific enterprise. How can anyone reproduce a result if they don’t know what you’ve done? How can they check your data analysis without your data or your software? Even more importantly, how can they look at your experimental procedures and improve them? And without sharing, how can you incorporate protocols developed by other researchers? All science builds on other science. For 400 or so years, that building process was based on sharing results: the limitations of print journals and face-to-face meetings made it difficult to do more.

Fortunately, we’re no longer bound by paper-based publishing. Amazon Web Services makes it possible to build huge, publicly accessible data archives at relatively low cost. Many large datasets are becoming available: for example, the Protein Data Bank, and the raw data from the Large Hadron Collider. Figshare is a cloud-based service for managing and publishing scientific datasets. Sharing protocols and the software used for data analysis is a different problem, but it’s also been solved. GitHub is widely used by software engineers, and provides an excellent model for sharing software in ways that allow others to modify it and use it for their own work. When we have languages for describing experiments precisely (Antha is a very promising start), using tools like GitHub to share protocols will become natural.
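To make the idea concrete, here is a minimal sketch of what a machine-readable protocol might look like. The field names are invented for illustration (a real protocol language like Antha is far richer); the point is that once an experiment is structured data, it can be diffed, versioned and shared on GitHub exactly as software is.

```python
import json

# A hypothetical, minimal machine-readable protocol. Field names are
# invented for illustration; a real language such as Antha defines its
# own, much richer vocabulary.
protocol = {
    "name": "heat-shock transformation",
    "version": "1.2.0",
    "steps": [
        {"action": "incubate",   "temp_c": 0,  "duration_s": 1800},
        {"action": "heat_shock", "temp_c": 42, "duration_s": 45},
        {"action": "incubate",   "temp_c": 0,  "duration_s": 120},
    ],
}

# Serializing to a stable text form is what makes the protocol
# versionable: two revisions can be diffed line by line.
serialized = json.dumps(protocol, indent=2, sort_keys=True)
print(serialized)
```

A colleague's change to the heat-shock duration would then show up as a one-line diff in version control, rather than as unwritten lab lore.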

Once the data and protocols are available, they have to be searchable. We tend to view scientific results as wholes, but any experiment is made up of many small steps. It’s possible that the most useful part of any experiment isn’t the “result” itself, but some intermediate step along the way. How did someone run a particular reaction, and how does that relate to my experiment? I might have no interest in the overall result, but information about some particular part of the experiment and its intermediate results might be all I need to solve an entirely different problem. That’s how we’re going to get beyond lab folklore: by looking into other labs and seeing how they’ve solved their problems. With many scientists running the same reaction in different contexts, collecting and publishing the data from all their intermediate steps, it should be possible to determine why a specific step failed or succeeded. It should be possible to investigate what’s different about my experiment: what changes (intentional or not) have I made to the protocol?
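A toy sketch of what that search could look like: if every lab published its intermediate steps as structured records, "how did someone else run this reaction, and what did they do differently?" becomes a query. The records and field names below are invented purely for illustration.

```python
# Hypothetical published step records from three different labs.
steps = [
    {"lab": "A", "action": "heat_shock", "temp_c": 42, "duration_s": 45,   "succeeded": True},
    {"lab": "B", "action": "heat_shock", "temp_c": 42, "duration_s": 90,   "succeeded": False},
    {"lab": "C", "action": "ligation",   "temp_c": 16, "duration_s": 3600, "succeeded": True},
]

# Pull every heat-shock step, regardless of which experiment it came from.
heat_shocks = [s for s in steps if s["action"] == "heat_shock"]

# Compare the successful run against the failed one: the parameters that
# differ are the ones worth investigating first.
ok, failed = heat_shocks
differing = {k for k in ok if k not in ("lab", "succeeded") and ok[k] != failed[k]}
print(differing)
```

Here the query immediately isolates duration as the variable separating success from failure, which is exactly the kind of cross-lab comparison folklore cannot provide.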

...Helping scientists to analyze their own data is important, but the goals are much bigger: analysis across data sets, large-scale data mining, “permissionless innovation.” This is what the future of science will look like if we’re bold enough to look beyond centuries-old models.
Lucky machines in the lab

Let me give you an example that I'm thinking about a lot today, concerning the future of humankind in the field of medicine. At least to the best of my understanding, we're in the middle of a revolution in medicine. After medicine in the 20th century focused on healing the sick, now it is more and more focused on upgrading the healthy, which is a completely different project. And it's a fundamentally different project in social and political terms, because whereas healing the sick is an egalitarian project ... you assume there is a norm of health, anybody that falls below the norm, you try to give them a push to come back to the norm, upgrading is by definition an elitist project. There is no norm that can be applicable to everybody.

Once you really solve a problem like direct brain-computer interface ... when brains and computers can interact directly, to take just one example, that's it, that's the end of history, that's the end of biology as we know it. Nobody has a clue what will happen once you solve this. If life can basically break out of the organic realm into the vastness of the inorganic realm, you cannot even begin to imagine what the consequences will be, because your imagination at present is organic. So if there is a point of Singularity, as it's often referred to, by definition, we have no way of even starting to imagine what's happening beyond that.

...In the Industrial Revolution of the 19th century, what humanity basically learned to produce was all kinds of stuff, like textiles and shoes and weapons and vehicles, and this was enough for the very few countries that underwent the revolution to subjugate everybody else. What we're talking about now is like a second Industrial Revolution, but the product this time will not be textiles or machines or vehicles, or even weapons. The product this time will be humans themselves.
We're basically learning to produce bodies and minds. Bodies and minds are going to be the two main products of the next wave of all these changes. Once you know how to produce bodies and brains and minds, cheap labor in Africa or South Asia or wherever, it simply counts for nothing.
Death Is Optional

Only 5% of respondents report that workers in innovation programs feel highly motivated to innovate. More than three of four say their new ideas are poorly reviewed and analyzed. And less than a third of the firms surveyed say they regularly measure or report on innovation.

More than four of five respondents (81%) say their firms do not have the resources needed to fully pursue the innovations and new ideas capable of keeping their companies ahead in the competitive global marketplace.

Deming found that 94% of the problem is the system, 6% the worker. The results show that workers are frustrated and don’t know how to contribute, and believe their efforts aren’t taken as seriously as they should be.
Steve Denning - Why U.S. Firms Are Dying: Failure To Innovate

To be fair, predicting the future is hard. But what if the industry experts here were wrong about the iPhone, not just because of the uncertainty of predictions, but also because they were experts?  What if they were blinded by their own knowledge, so confident in what was already working that they couldn't contemplate the feasibility of something new? In 1997, Clayton Christensen coined the term "the Innovator's Dilemma" to describe the choice companies face between incrementally improving their core business (perfecting old ideas) and embracing emerging markets that could upend their core business (investing in new ideas).

But what if the innovator's dilemma is part of something bigger—a creator's dilemma, an innate bias against novelty?

Indeed, it turns out that our aversion to new ideas touches more than technology companies. It affects entertainment executives deciding between new projects, managers choosing between potential projects or employees, and teachers assessing conformist versus non-conformist children. It is a bias against the new. The brain is hardwired to distrust creativity.

The researchers found that new ideas—those that remixed information in surprising ways—got worse scores from everyone, but they were particularly punished by experts.
This article recalls the key point of Kuhn’s ‘The Structure of Scientific Revolutions’.
The physicist Max Planck put it best: "Science advances one funeral at a time."
Why Experts Reject Creativity
People think they like creativity. But teachers, scientists, and executives are biased against new ways of thinking.
In 2007, Steve Ballmer, then-CEO of Microsoft, emphatically predicted that Apple's new phone would fail. "There's no chance that the iPhone is going to get any significant market share," he said. "No chance."

The volume of Ballmer's voice makes him a popular target in technology, but he wasn't an outlier, just the loudest guy in a crowd of skeptical experts. RIM CEO Jim Balsillie said the iPhone would never represent "a sort of sea-change for BlackBerry." Cellphone experts writing in Bloomberg, PC Magazine, and Marketwatch all said it would flop.

No one had seen something like the iPhone before. One large screen? With no keypad? That tries to be everything at once, but actually offers a poor call service, slow Internet speeds, and worse camera quality than your existing devices? The experts were certain: This will not work.

Everybody knows the end of that story. The failure forecasts failed.

Here's a great 7 min video - not about the danger of the future, but the danger of now. Worth the view.
Michio Kaku: Will Mankind Destroy Itself?
The physicist sees two major trends in the world today: the first is toward a multicultural, scientific, tolerant society; the other, as evidenced by terrorism, is fundamentalist and monocultural. Whichever one wins out will determine the fate of mankind.

Here’s one of the outputs of the recent World Economic Forum.
Top 10 emerging technologies of 2015
Technology is perhaps the greatest agent of change in the modern world. While never without risk, technological breakthroughs promise innovative solutions to the most pressing global challenges of our time. From zero-emission cars fuelled by hydrogen to computer chips modelled on the human brain, this year’s 10 emerging technologies offer a vivid glimpse of the power of innovation to improve lives, transform industries and safeguard our planet.

To compile this list, the World Economic Forum’s Meta-Council on Emerging Technologies, a panel of 18 experts, draws on the collective expertise of the Forum’s communities to identify the most important recent technological trends. By doing so, the Meta-Council aims to raise awareness of their potential and contribute to closing the gaps in investment, regulation and public understanding that so often thwart progress.

The 2015 list is: 1. Fuel cell vehicles; 2. Next-generation robotics; 3. Recyclable thermoset plastics; 4. Precise genetic engineering techniques; 5. Additive manufacturing; 6. Emergent artificial intelligence; 7. Distributed manufacturing; 8. ‘Sense and avoid’ drones; 9. Neuromorphic technology; 10. Digital genome

Speaking of foresight - here’s a 48 min interview with Vernor Vinge, who coined the term ‘The Singularity’. Worth the watch.
Vernor Vinge - Foresight and the Singularity - Interview
Interview with renowned author Vernor Vinge - about the Technological Singularity, ways to think about it, strategic forecasting, future studies and risk. Specific Topics: The Beginnings of the Term: 'Technological Singularity' / The Metaphor Implied by the Singularity / Narratives
How Possibility Shapes the Future / Utopias & Dystopias / Thinking about the Future: What Do We Want?

He is famous for his groundbreaking 1993 essay on the idea of the "Singularity," called "The Coming Technological Singularity: How to Survive in the Post-Human Era."

Vinge is an emeritus professor of mathematics at San Diego State University and considered one of the world's greatest science fiction writers: a five-time winner of the Hugo Award, science fiction's most prestigious honor! Vinge's stories explore themes including deep space, the future, and the singularity, a term he famously coined for the future emergence of a greater-than-human intelligence brought about by the advance of technology.

Here’s an interesting article re-visioning how mobile networks can be built.
STEVE PERLMAN WANTS to turn your apartment into an antenna for his new cellular phone network.

Perlman is a serial Silicon Valley inventor and entrepreneur best known for selling his web TV company to Microsoft for half a billion dollars, and over the last few years, he and his team of engineers have built a contraption that aims to significantly boost the speed of our cellular services. He could license this technology to the big-name wireless carriers, such as AT&T and Verizon, as a way of improving their networks. But that’s not the only option. He can ask you to set it up.

If you install this tiny antenna on your roof, Perlman says, it can receive wireless calls and data not just from your own mobile phone but from mobile phones across the neighborhood. Then it can route these calls and data across your home internet connection towards their ultimate destination. And when it does, he’ll give you a cut of the revenue from this crowdsourced phone network—a network that, thanks to the antenna’s unusual design, could increase wireless speeds several times over.

Talking about the Singularity - here’s a tipping point looming on the geopolitical horizon.
Deutsche Bank: Solar Will Be Dominant Global Electricity Source By 2030
Deutsche Bank says the solar market is massive and will generate $5 trillion in revenue by 2030. It describes solar plus storage as the next killer app, and says that even in India, solar will reach 25% by 2022.

Deutsche Bank has produced another major report that suggests solar will become the dominant electricity source around the world as it beats conventional fuels, generates $5 trillion in revenue over the next 15 years, and displaces large amounts of fossil fuels.

In a detailed, 175-page report, the Deutsche analysts led by Vishal Shah say the market potential for solar is massive. Even now, with 130GW of solar installed, it accounts for just 1 per cent of the 6,000GW, or $2 trillion electricity market (that is an annual figure).

But by 2030, the solar market will increase 10-fold, as more than 100 million customers are added, and solar’s share of the electricity market jumps to 10 per cent. By 2050, it suggests, solar’s share will be 30 per cent of the market, and developing markets will see the greatest growth.

“Over the next 5-10 years, we expect new business models to generate a significant amount of economic and shareholder value,” the analysts write in the report. Within three years, the economics of solar will take over from policy drivers ….

Their predictions are underpinned by several observations. The first is that solar is at grid parity in more than half of all countries, and within two years will be at parity in around 80 per cent of countries. And at a cost of just 8c/kWh to 13c/kWh, it is up to 40 per cent below the retail price of electricity in many markets. In some countries, such as Australia, it is less than half the retail price.

At utility scale, parity is also drawing near. Just four years ago, the ratio of coal-based wholesale electricity to solar electricity cost was 7:1. Now, says Deutsche Bank, this ratio is less than 2:1, and it could likely approach 1:1 over the next 12-18 months. In some markets, it already is cheaper. And in India, that ratio could fall to 1:1 this year, with major ramifications for coal projects such as those in the Galilee Basin.
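Some back-of-envelope arithmetic on the ratios quoted above makes the trajectory vivid. This is purely illustrative and assumes coal's wholesale cost stays flat, so the cost movement is attributed entirely to solar:

```python
import math

# If the coal:solar wholesale cost ratio fell from 7:1 to 2:1 over four
# years with coal flat, solar's relative cost shrank by a constant factor
# r per year, where r**4 = 2/7.
r = (2.0 / 7.0) ** 0.25
annual_decline_pct = (1.0 - r) * 100.0            # roughly 27% per year

# At that same pace, time for the ratio to fall from 2:1 to 1:1:
years_to_parity = math.log(1.0 / 2.0) / math.log(r)   # roughly 2.2 years

print(f"implied annual solar cost decline: {annual_decline_pct:.0f}%")
print(f"years from 2:1 to 1:1 at that pace: {years_to_parity:.1f}")
```

The flat-coal assumption yields a little over two years to parity; Deutsche Bank's shorter 12-18 month estimate implies it expects the decline to accelerate, or coal costs to rise, or both.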

Here’s an article about India’s solar ambitions.
India’s Ambitious Bid to Become a Solar Power
The Indian government hopes to increase the country’s solar capacity 30-fold by 2022.
India’s Prime Minister, Narendra Modi, made headlines last fall by announcing his ambition to install 100 gigawatts of solar power capacity—over 30 times more than India has now—by 2022. Skeptics noted Modi’s lack of a detailed plan and budget, but some well-capitalized industrial players have apparently caught Modi’s solar fever: at a renewable energy summit called by Modi last month he collected pledges for 166 gigawatts of solar projects.

At the New Delhi summit, renewables giants such as First Solar and SunEdison mixed for the first time with chief ministers from Indian states and top executives of Indian industrial conglomerates such as Adani Enterprises and the National Thermal Power Corporation, India’s largest power generator.

One thing not on the WEF’s list of top ten is the rapidly emerging virtual/augmented mixed-reality technology. Here’s a journal worth following.
Assembled 2015 - Journal of Virtual Worlds Research
Virtual worlds hold a tremendous amount of potential for research, education, and interaction. While the literature available on virtual worlds has increased over the years, there are still unexplored arenas as well as areas that require further conversation and investigation. Some of us are continuing to develop our avatars and hone our skills in virtual worlds, while others are finding new ways to leverage the openness of these environments via unexplored opportunities within the virtual world.
This Assembled 2015 issue contains selected peer-reviewed articles that start, and in some cases continue, discussions about the vastness and versatility of virtual worlds.

Speaking of virtual worlds, this is a very interesting 4 min video from a longtime ‘hacker’ of the Kinect - everything is still ‘buggy’ but it is good enough to imagine what’s coming.
Watching Myself Building a Molecule
A "recursive" video of sorts, showing a recording of myself watching a previously-made recording of myself building a C-60 Buckminsterfullerene using the Nanotech Construction Kit. Both recordings were done in the 3-Kinect 3D video capture space in IDAV's VR lab, using a head-tracked Oculus Rift DK1 and a tracked input device (both times).

The production of knowledge is increasingly transdisciplinary - new disciplines arising from the learning-by-doing of new innovations. This is a technical 39 min video that some might find interesting. The speed of synthetic-bio-robotics is accelerating.
The sweet spot integrating neurobiology, robotics and synthetic biology - Joseph Ayers
The adaptive capabilities of underwater organisms result from layered exteroceptive reflexes responding to gravity, impediment, and hydrodynamic and optical flow. In combination with taxic responses to point sources of sound or chemicals, these reflexes allow reactive autonomy in the most challenging of environments. We are developing a new generation of lobster and lamprey-based robots that operate under control by synaptic networks rather than algorithms. The networks are based on the command neuron, coordinating neuron, central pattern generator architecture, code sensor input as labeled lines and activate shape memory alloy-based artificial muscle through a simple neuromuscular interface. In a separate project, we are developing an electronic nervous system to control the flight of RoboBee. In all systems, the behavioral set results from chaining sequences of exteroceptive reflexes released by sensory feedback from the environment. In parallel we are exploring principles of synthetic biology to develop biohybrid robots and sensors and actuators that can interface to electronic nervous systems. Cyberplasm combines an aVLSI electronic nervous system with engineered cellular sensors and engineered muscle that responds to light generated by oLEDs gated by neuron action potentials. In the ONR MURI we are integrating programmable bacteria with RoboLobster and RoboLamprey to enhance chemosensory capabilities.
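The central pattern generator idea in the abstract above can be sketched in a few lines: two simulated neurons with reciprocal inhibition and slow adaptation produce alternating rhythmic output with no algorithmic sequencing at all. This is a generic Matsuoka-style half-center oscillator with textbook parameter values, not the actual controller of Ayers' robots.

```python
import numpy as np

# Matsuoka-style half-center oscillator: two leaky "neurons" that inhibit
# each other and self-adapt. Under constant tonic drive u, their outputs
# alternate rhythmically -- the essence of a central pattern generator.
# Parameters are generic illustrative values.
def half_center(steps=6000, dt=0.005, tau_r=0.5, tau_a=1.0,
                beta=2.5, w=2.5, u=1.0):
    x = np.array([0.1, 0.0])    # membrane states (slightly asymmetric start)
    v = np.zeros(2)             # adaptation states
    trace = np.empty((steps, 2))
    for t in range(steps):
        y = np.maximum(x, 0.0)                            # firing rates
        dx = (-x - beta * v - w * y[::-1] + u) / tau_r    # mutual inhibition
        dv = (-v + y) / tau_a                             # slow adaptation
        x = x + dt * dx
        v = v + dt * dv
        trace[t] = y
    return trace

trace = half_center()
# The two outputs alternate: their difference keeps changing sign.
flips = np.sum(np.diff(np.sign(trace[2000:, 0] - trace[2000:, 1])) != 0)
print(flips)
```

Each sign flip is one half-cycle of the rhythm; driving antagonist muscles from the two outputs yields a swimming or walking gait, which sensory feedback can then reshape, as the abstract describes.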

Here is an interesting article from DARPA about the future of prosthetics. A 2 min video is included.
DARPA neural-interfaced prosthetic limbs will allow a sense of touch; tests in patients’ homes are targeted by 2019
Rehabilitation experts at the University of Pittsburgh School of Medicine hope to one day give people with an arm amputation a prosthetic limb that not only moves like a natural one, but “feels” like it, too. They expect such sensation will improve dexterous control of the device and give users greater intuition about what they are doing with their prosthetic.

With funding from the Defense Advanced Research Projects Agency (DARPA)’s Hand Proprioception and Touch Interfaces (HAPTIX) program, Robert Gaunt, Ph.D., assistant professor, Department of Physical Medicine and Rehabilitation (PM&R), Pitt School of Medicine and a multidisciplinary research team from Pitt, West Virginia University and Ripple LLC will begin developing the technology with the aim of being able to test it in patients’ homes within four years.

“Advanced prosthetic limbs that behave like the hand and arm they are replacing have been an unrealized promise for many years largely because until recently, the technologies to really accomplish this goal simply haven’t been available,” Dr. Gaunt said. “To make the most of these new capabilities, we have to integrate the prosthetic into the remaining neural circuitry so the patient can use it like a regular hand that, for example, can pick up a pen, gently hold an egg or turn a stuck doorknob.”

In the 18-month, first phase of the project, the team will recruit five volunteers to try to demonstrate that stimulation of the sensory portion of the spinal cord nerves, which would normally innervate the hand and forearm, can cause the amputee to feel distinct sensations of touch and joint movement in the “phantom” hand and wrist.

Speaking of new forms of prosthetics here’s a 2 year old 7min video of Kevin Warwick’s experiment with an implant. Well worth the view.
Cyborg The future of the human
Adam Shaw meets Professor Warwick, the first person in the world to have successfully controlled a robot with his own thoughts.

Speaking of the cyborg of the future - here’s something here now.
This Woman Flew an F-35 Simulator with Her Mind
Jan Scheuermann, a quadriplegic and pioneering patient for an experimental Pentagon robotics program, continues to break ground in freeing the mind from the body.

The 55-year-old mother of two in 2012 agreed to let surgeons implant electrodes on her brain to control a robotic arm. More recently, she flew an F-35 Joint Strike Fighter simulator using nothing but her thoughts, an official said.

Arati Prabhakar, director of the Defense Advanced Research Projects Agency, cited the breakthrough last week at the first annual Future of War conference. The event was organized by the New America Foundation, a nonpartisan research group in Washington, D.C.

Scheuermann, who became paralyzed years ago from a rare genetic disease, has tolerated the two pea-sized implants on her left motor cortex “very well,” Prabhakar said, allowing her to extend her participation in the DARPA project.

Speaking about the future of the human-computational interface - this is a fascinating conversation. There is a transcript for reading, and the video is 45 min.
Death Is Optional
A Conversation: Yuval Noah Harari, Daniel Kahneman
Once you really solve a problem like direct brain-computer interface ... when brains and computers can interact directly, that's it, that's the end of history, that's the end of biology as we know it. Nobody has a clue what will happen once you solve this. If life can break out of the organic realm into the vastness of the inorganic realm, you cannot even begin to imagine what the consequences will be, because your imagination at present is organic. So if there is a point of Singularity, by definition, we have no way of even starting to imagine what's happening beyond that.

YUVAL NOAH HARARI, Lecturer, Department of History, Hebrew University of Jerusalem, is the author of Sapiens: A Brief History of Humankind.

DANIEL KAHNEMAN is the recipient of the Nobel Prize in Economics, 2002 and the Presidential Medal of Freedom, 2013. He is the Eugene Higgins Professor of Psychology Emeritus, Princeton, and author of Thinking Fast and Slow.

Speaking of advancing maturity - here’s some new research that should give elders more hope.
New study suggests aging has little impact on brain function
When we get older, communication between neurons slows down and certain regions of the brain see reduced function. At least, that's the current understanding. But a new study by researchers at the University of Cambridge and Medical Research Council's Cognition and Brain Sciences Unit shows that the difference between older brains and younger ones may not be so great. The researchers demonstrated that functional magnetic resonance imaging (fMRI), which is commonly used to study brain activity, is susceptible to signal noise from changing vascular (blood vessel) activity.

fMRI doesn't directly measure neural activity but rather infers it through changes in regional blood flow (because blood flow increases in a brain region whenever that region is in use). Normally that isn't much of an issue, but blood circulation differs between people of different ages, and the emerging method favored for accounting for the resultant signal noise, making additional measurements (such as of people holding their breath), is impractical in larger cohort studies of aging (comparative studies conducted over a long period that aim to establish links between risk factors and health outcomes).

This is a cute representation of rapidly emerging consumer drone technology - where’s it going to be in another decade? The article is short and the pictures & 3 min video are worth the view. It makes me think of a truly ubiquitous participatory panopticon.
Zano is a palm-sized drone which is automatically steered by your smartphone device
Zano is an ultra-portable, personal aerial photography and HD video capture platform - small enough to fit in the palm of your hand and intelligent enough to fly all by itself! Zano connects directly to your smart device (iOS or Android) via onboard WiFi and enables you to instantly begin capturing and sharing moments like never before.

The drone measures 2.5 x 2.5 in (6.5 x 6.5 cm), making it one of the smaller nano drones we've come across. A collection of sensors work seamlessly together to allow Zano to avoid obstacles, hold its position and know exactly where it is, in conjunction with your smart device, at all times.

Speaking of palm-sized computational capability - here’s the trajectory of wearables.
Smartphones Will Soon Learn to Recognize Faces and More
Smartphone apps will be able to perceive objects and perform other intelligent tricks thanks to Qualcomm’s newest mobile chips.
Smartphone camera apps could soon do more than just capture images. Software integrated with a new line of smartphone chips will be capable of recognizing, say, a cat or city skyline in a photo, and tagging pictures of your friends with their names.

The chip maker Qualcomm announced last week that it will bundle the software with its next major chip for mobile devices. The technology, which Qualcomm calls Zeroth, could make sophisticated machine learning more common on mobile devices. As well as processing images, the Zeroth software is designed to allow phones to recognize speech or other sounds, and to learn to spot patterns of activity from a device’s sensors.

The technology uses an approach to machine learning known as deep learning that has led to recent advances in speech and object recognition, as well as software able to play Atari games with superhuman skill (see “Google’s Intelligence Designer”). Deep learning software is loosely modeled on some features of brains. It can be trained to recognize certain objects in images by processing many example photos through a network of artificial “neurons” arranged into hierarchical layers.
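The "hierarchical layers" idea can be sketched very compactly: each layer re-combines the previous layer's features through a set of weights and a nonlinearity. The toy network below uses random, untrained weights and dense layers, so it illustrates only the layered structure, not Qualcomm's Zeroth software or a trained recognizer (real image systems use convolutional layers that share weights across positions).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Nonlinearity applied after each layer's linear mix.
    return np.maximum(z, 0.0)

def forward(x, layers):
    # Pass the input through each layer in turn: linear mix, then ReLU.
    h = x
    for W in layers:
        h = relu(h @ W)
    return h

x = rng.standard_normal((1, 64))            # e.g. a flattened 8x8 image patch
layers = [rng.standard_normal((64, 32)),    # low-level feature detectors
          rng.standard_normal((32, 16)),    # mid-level combinations
          rng.standard_normal((16, 4))]     # 4 hypothetical "class" scores
scores = forward(x, layers)
print(scores.shape)
```

Training consists of adjusting those weight matrices on many example photos until the final scores match the labels; the hierarchy is what lets early layers learn edges and later layers learn whole objects.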

Speaking about facial recognition - here’s an interesting article about how to measure the intelligence of visual AI.
AI Researchers Propose a Machine Vision Turing Test
Researchers have proposed a Visual Turing Test in which computers would answer increasingly complex questions about a scene.
Computers are getting better each year at AI-style tasks, especially those involving vision—identifying a face, say, or telling if a picture contains a certain object. In fact, their progress has been so significant that some researchers now believe the standardized tests used to evaluate these programs have become too easy to pass, and therefore need to be made more demanding.

At issue are the “public data sets” commonly used by vision researchers to benchmark their progress, such as LabelMe at MIT or Labeled Faces in the Wild at the University of Massachusetts, Amherst. The former, for example, contains photographs that have been labeled via crowdsourcing, so that a photo of a street scene might have a “car” and a “tree” and a “pedestrian” highlighted and tagged. Success rates have been climbing for computer vision programs that can find these objects, with most of the credit for that improvement going to machine learning techniques such as convolutional networks, often called Deep Learning.

But a group of vision researchers say that calling out objects in a photograph, in addition to having become too easy, is simply not very useful; what computers really need to be able to do is to “understand” what is “happening” in the picture. And so with support from DARPA, Stuart Geman, a professor of applied mathematics at Brown University, and three others have developed a framework for a standardized test that could evaluate the accuracy of a new generation of more ambitious computer vision programs.

Here is an interesting article on the changing practice of science - well worth the read for anyone interested in what ‘knowledge management’ implies for understanding what ‘knowledge’ is.
Lucky machines in the lab
Beyond lab folklore and mythology
The more people I talked to, the more stories I heard: labs where the experimental protocols weren’t written down, but were handed down from mentor to student. Labs where there was a shared common knowledge of how to do things, but where that shared culture never made it outside, not even to the lab down the hall. There’s no need to write down or publish stuff that’s “obvious” or that “everyone knows.” To someone more familiar with literature than with biology labs, this behavior was immediately recognizable: we’re in the land of mythology, not science. Each lab has its own ritualized behavior that “works.” Whether it’s protocols, lucky machines, or common knowledge that’s picked up by every student in the lab (but which might not be the same from lab to lab), the process of doing science is an odd mixture of rigor and folklore. Everybody knows that you use 42 C for 45 seconds, but nobody really knows why. It’s just what you do.

Despite all of this, we’ve gotten fairly good at doing science. But to get even better, we have to go beyond mythology and folklore. And getting beyond folklore requires change: changes in how we record data, changes in how we describe experiments, and perhaps most importantly, changes in how we publish results.

….The way we do science is changing. Experiments are getting much more complex; the data they produce is growing by orders of magnitude; and the methods that worked 50 or 60 years ago, when we barely knew what DNA was, are far from adequate now that we’re learning how to write the genetic code. Fortunately, we know how to get beyond mythology and folklore. Many of the tools we need exist, and the ones that don’t yet exist are being built. We’re creating a new generation of scientists who can collect all the data, share everything, and build the tools needed to facilitate that sharing. And when we’ve done that, we will have achieved a new scientific revolution — or, more precisely, fulfilled the promise of the first one.

This is a very interesting article - about the connection between quantum-level events and biology. This is a field to watch.
The Origin of Life And The Hidden Role of Quantum Criticality
Quantum criticality must have played a crucial role in the origin of life, say researchers who have found its hidden signature in a wide range of important biomolecules
One of the great puzzles of biology is how the molecular machinery of life is so finely coordinated. Even the simplest cells are complex three-dimensional biochemical factories in which a dazzling array of machines fills the shop floor.

These machines pump, push, copy, and compute in a dance of extraordinarily detailed complexity. Indeed, it is hard to imagine how the ordinary processes of conduction and electron transport allow this complexity to emerge given the losses that inevitably arise, even in much simpler circuits.

Today, Stuart Kauffman at the University of Calgary in Canada and a few pals provide some extraordinary new insight into how all this might happen. They show that most biomolecules are quantum critical conductors; their electronic properties are precisely tuned to the transition point between a metal and an insulator.

In other words, biomolecules belong to an entirely new class of conductor that is not bound by the ordinary rules of electron transport, a discovery that has profound implications for our understanding of the nature of life and its origin.

Quantum criticality describes the behaviour of electrons in large molecules when they occupy the exotic state that sits at the knife edge between conduction and insulation. When these molecules are conductors, some of their bound electrons are able to move freely under the influence of an electric field. By contrast, when these molecules are insulators, the electrons are not free to move.

The quantum critical state occurs when the electronic states are balanced between conduction and insulation.

For fans of Blade Runner and for everyone interested in the future. Here’s a lovely reflection with Roy Batty - only 6 min, from 2012.
Rutger Hauer and Blade Runner - "30 years ago I saw the future"
Rutger Hauer arrived early at the Centro Sperimentale di Cinematografia in Milan. There, a small number of journalists and students met the deep and glacial gaze of the legendary replicant Roy Batty, the character who went down in the history of cinema with “that” line that we have all quoted at least once in our lifetime: «...I've seen things you people wouldn't believe...». The 30th anniversary of "Blade Runner", a milestone of science-fiction cinema and an unquestioned masterpiece by Ridley Scott, is one of the reasons Hauer was in Milan. In our report, the Dutch actor retraces the crucial moments of that extraordinary cinematographic experience.

This is not robotics or domesticated DNA - but it can change the employment landscape or military capabilities.
Robotic suit gives shipyard workers super strength
Workers building the world’s biggest ships could soon don robotic exoskeletons to lug around 100-kilogram hunks of metal as if they’re nothing

At a sprawling shipyard in South Korea, workers dressed in wearable robotics were hefting large hunks of metal, pipes and other objects as if they were nothing.

It was all part of a test last year by Daewoo Shipbuilding and Marine Engineering at its facility in Okpo-dong. The company, one of the largest shipbuilders in the world, wants to take production to the next level by outfitting staff with robot exoskeletons that give them superhuman strength.

Both of these visions of the future are well worth viewing.
Here’s a 1987 vision of the future by Apple - only 6 min. We aren’t quite there yet - but Google Now is closer than we think. This vision still leaves the computer on the desktop - like the home phone.
Knowledge Navigator

And this is the latest vision from Microsoft - 7 min video. This one lets us imagine the rapidly emerging world of augmented reality - 3D printing - the Internet of Things. This vision assumes ubiquitous wifi.
Productivity Future Vision
How could emerging technologies transform the way we get things done 5-10 years in the future? Watch Kat, a young independent marine biologist, and Lola, a corporate executive, work together in a highly interconnected and information rich future.

Speaking of Microsoft and visions of productivity at work - here is a 30 min interview about the IoT with the person who coined the term.
Watch Modern Workplace to find out what the Internet of Things can do for your business (Episode 5)
In this episode of Modern Workplace, Kevin Ashton, the father of the Internet of Things, and Frank Burkitt, author of “The Strategist’s Guide to the Internet of Things,” discuss how the Internet of Things is changing how people and technology interact. With 1.9 billion devices connected today and an estimated 50 billion projected by 2020, the opportunity to gain efficiencies, enable innovation and increase agility through connected devices is massive.
If this is interesting - previous episodes and other material can be found here:

Here’s an interesting vision of urban farming, using both shipping containers and modular greenhouses, that seems both feasible and scalable. The site has some clear designs and shows how the design continues to evolve. This approach seems very doable and could be coming to a neighborhood near you soon. There are two 5 min videos as well.
Re-envisioning local food
The Farmery helps solve the inconveniences of farmers’ markets for consumers and farmers
For consumers, The Farmery makes it much easier to buy locally made goods. The Farmery will be open during ‘normal’ grocery hours, allowing consumers to buy locally made food at virtually any time of the day or night. Also, because The Farmery is a full-service grocery store, consumers won’t be forced to purchase goods in two separate places. Consumers can buy locally made products and virtually everything else they need from The Farmery.

For local farmers, selling produce, meats, and other goods to The Farmery offers advantages over selling at farmers’ markets. First, local producers won’t have to spend time arranging displays and packaging to sell their goods to The Farmery. The time saved can be spent on growing food or other more valuable tasks.

The Farmery has designed its distribution strategies to meet the needs of local producers, rather than the current supermarket attempts to fit local producers into an existing national distribution system.

For Fun
I had to share this - this is not laziness as much as it’s brilliant efficiency. The pictures are worth way more than 1,000 words - that’s how efficient these people are.
Taking Laziness to a Whole New Level

Here is a 3 min video on the birth of a new word.
Phubbing: A Word is Born
