Friday, April 17, 2015

Friday Thinking, 17 April 2015

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this.

Generally, the conversational aspect of AR is a fairly recent focus. Many of the use cases exploring the technology’s potential value have to do with streamlining repetitive actions. Improving supply chain processes, reducing waste, and increasing operational efficiency are priorities for most organizations, and AR can help give some companies a substantial edge. From real-time inventory management to maintenance records, AR technologies provide greater detail and more supporting data, which can improve both efficiency and accuracy.

But efficiency is only one component of business competitiveness. Many roles that AR might supplement may soon be usurped by advanced robotics and other forms of automation. What is the value of AR when the people it is supposed to enhance are no longer needed to do the job? More fundamentally, in an increasingly complex and unpredictable world, many people consider increased efficiency secondary to the ability to collectively digest and act on rapid changes—in essence, the ability to learn.

In this scenario, AR is in a position to gain value. Collaboration is the bedrock of innovation, and AR enables us to learn faster by working together. To do so, we typically rely on fundamentally human capabilities—imagination, creativity, genuine insight, and emotional and moral intelligence—that are difficult or impossible to automate. In the same way that AR enables us to use data more deeply, it has the potential to help us communicate more deeply and meaningfully with each other.
John Hagel & John Seely Brown - Augmented Reality: Enabling Learning Through Rich Context


People often comment that the new leadership I propose couldn’t possibly work in “the real world.”  I assume they are referring to their organization or government, a mechanistic world managed by bureaucracy, governed by policies and laws, filled with people who do what they’re told, who surrender their freedom to leaders and sit passively waiting for instructions. This “real world” craves efficiency and obedience.  It relies on standard operating procedures for every situation, even when chaos erupts and things are out of control.

This is not the real world.  This world is a manmade, dangerous fiction that destroys our capacity to deal well with what’s really going on.  The real world, not this fake one, demands that we learn to cope with chaos, that we understand what motivates humans, that we adopt strategies and behaviors that lead to order, not more chaos.

Here is the real world described by new science.  It is a world of interconnected networks, where slight disturbances in one part of the system create major impacts far from where they originate.  In this highly sensitive system, the most minute actions can blow up into massive disruptions and chaos.  But it is also a world that seeks order.  When chaos erupts, it not only disintegrates the current structure, it also creates the conditions for new order to emerge.  Change always involves a dark night when everything falls apart.  Yet if this period of dissolution is used to create new meaning, then chaos ends and new order emerges.

This is a world that knows how to organize itself without command and control or charisma.  Everywhere, life self-organizes as networks of relationships.  When individuals discover a common interest or passion, they organize themselves and figure out how to make things happen.  Self-organizing evokes creativity and results, creating strong, adaptive systems. Surprising new strengths and capacities emerge.

In this world, the ‘basic building blocks’ of life are relationships, not individuals.  Nothing exists on its own or has a final, fixed identity.  We are all ‘bundles of potential.’  Relationships evoke these potentials.  We change as we meet different people or are in different circumstances.
Margaret J. Wheatley
The Real World: Leadership Lessons from Disaster Relief and Terrorist Networks- 2006


Local Motors recently demonstrated that it can print a good-looking roadster (including wheels, chassis, body, roof, interior seats, and dashboard, but not yet the drivetrain) from bottom to top in 48 hours. When it goes into production, the roadster, including drivetrain, will be priced at approximately $20,000. As the cost of 3-D equipment and materials falls, traditional methods’ remaining advantages in economies of scale are becoming a minor factor.

Here’s what we can confidently expect: Within the next five years we will have fully automated, high-speed, large-quantity additive manufacturing systems that are economical even for standardized parts. Owing to the flexibility of those systems, customization or fragmentation in many product categories will then take off, further reducing conventional mass production’s market share.
The 3-D Printing Revolution


Here’s an article from John Hagel and John Seely Brown about the future of learning - or at least some important new avenues for it. Within the next decade, AR will be a key way to learn language and to meet any just-in-time learning need. With the emergence of AR and Virtual Reality (VR) converging into Mixed Reality (MR), we are finally reaching the point where the content of the digital environment is no longer just the old media of the printing press. Work and learning will inevitably move into an immersive, interactive MR environment.
Augmented Reality: Enabling Learning Through Rich Context
In his 1992 novel “Snow Crash,” Neal Stephenson envisioned the Metaverse: a three-dimensional manifestation of the Internet in which people interact and collaborate via digitally-constructed avatars. In the decades since, technology has advanced to the point where such a place no longer seems like science fiction.

Stephenson’s Metaverse is a virtual reality space, a completely immersive computer-generated experience whose users have minimal ability to interact with the real world. In contrast to this fictional vision is today’s burgeoning field of augmented reality (AR), a technology that superimposes visual information or other data in front of one’s view of the real world.

One of the most well-known AR technologies, Google Glass, projects data onto the upper right corner of the wearer’s glasses lens, creating a relatively seamless interaction between that information and reality. Today, such technologies tend to get noticed for either their novelty value or their role in privacy concerns. In the longer term, they can have tremendous potential to change the way we interact with our technology and with each other.

When Google Glass was first released, many analysts focused on its potential to change the way media was created and consumed, viewing it essentially as a head-mounted smartphone. Since then, many people have reacted negatively to a device that can constantly film one’s surroundings or relay social media to the wearer in the middle of a conversation. However controversial the circumstances surrounding these instances of intrusive use may be, they seem to have limited AR’s potential as an integrated social media tool, at least for the time being.


Here is a new development from all five Deans at MIT - about the accelerating emergence and importance of Big Data and social physics. The question is how well research in government organizations is placed to join and adopt new Big Data methodologies. Will government researchers have to wait until every worker is issued a mobile device before they begin to think about how social science methodologies should change?
Deans announce new Institute for Data, Systems, and Society
MIT-wide effort aims to bring the power of data to the people.
What do data scientists and social scientists have in common? Not nearly enough — yet. But now, MIT is creating a new institute that will bring together researchers working in the mathematical, behavioral, and empirical sciences to capitalize on their shared interest in tackling complex societal problems.

As announced today by the deans of all five of the Institute’s schools, MIT will officially launch the new Institute for Data, Systems, and Society (IDSS) on July 1. Offering a range of cross-disciplinary academic programs, including a new undergraduate minor in statistics, IDSS will be directed by Munther Dahleh, the William A. Coolidge Professor in the Department of Electrical Engineering and Computer Science.

While providing a structure and incentives for new alliances among researchers from across MIT, IDSS will become a central “home” for faculty from the Engineering Systems Division and a number of existing units, including the Laboratory for Information and Decision Systems and the Sociotechnical Systems Research Center. IDSS will also launch a new MIT center on statistics.

“The Institute for Data, Systems, and Society will be a platform for some of the most exciting research and educational activity in complex systems at MIT,” Provost Martin Schmidt says. “Its formation is the result of intensive consultations among more than three dozen faculty members over many months. Those consultations have helped define many of the challenges that need to be addressed. I am deeply grateful to Munther for his leadership throughout this process.”


Speaking about Big Data and Artificial Intelligence - here’s something about advances in programming.
Graphics in reverse
Probabilistic programming does in 50 lines of code what used to take thousands.
Most recent advances in artificial intelligence — such as mobile apps that convert speech to text — are the result of machine learning, in which computers are turned loose on huge data sets to look for patterns.

To make machine-learning applications easier to build, computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts. In 2013, the U.S. Defense Advanced Research Projects Agency, an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research.

At the Computer Vision and Pattern Recognition conference in June, MIT researchers will demonstrate that on some standard computer-vision tasks, short programs — less than 50 lines long — written in a probabilistic programming language are competitive with conventional systems with thousands of lines of code.

“This is the first time that we’re introducing probabilistic programming in the vision area,” says Tejas Kulkarni, an MIT graduate student in brain and cognitive sciences and first author on the new paper. “The whole hope is to write very flexible models, both generative and discriminative models, as short probabilistic code, and then not do anything else. General-purpose inference schemes solve the problems.”
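To make the pattern concrete, here is a minimal sketch of the probabilistic-programming idea - write a short generative model, then hand inference to a general-purpose scheme - in plain Python with rejection sampling. This is a toy illustration of the concept, not the MIT vision system or any particular probabilistic language.

```python
import random

def flip_coin(bias, n_flips=20):
    # Generative model: simulate n flips of a coin with the given bias.
    return sum(random.random() < bias for _ in range(n_flips))

def infer_bias(observed_heads, n_flips=20, samples=20000):
    """General-purpose inference by rejection sampling: draw a bias from
    the prior, run the generative program forward, and keep the bias only
    when the simulated data matches the observation. The kept samples
    approximate the posterior - the modeler writes no inference code."""
    accepted = []
    for _ in range(samples):
        bias = random.random()                   # uniform prior over bias
        if flip_coin(bias, n_flips) == observed_heads:
            accepted.append(bias)                # condition on the data
    return sum(accepted) / len(accepted)         # posterior-mean estimate

random.seed(0)
estimate = infer_bias(observed_heads=15)
```

The whole "program" is the generative model; conditioning and inference are generic, which is why probabilistic code can stay so short. Real systems replace rejection sampling with far more efficient general-purpose schemes.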


Here is an imminent implementation of Artificial Intelligence and a whole new source of Big Data - from which more ‘machine learning’ will arise.
The car that knows when you’ll get in an accident before you do
I’m behind the wheel of the car of the future. It’s a gray Toyota Camry, but it has a camera pointed at me from the corner of the windshield recording my every eye movement, a GPS tracker, an outside-facing camera and a speed logger. It sees everything I’m doing so it can predict what I’m going to do behind the wheel seconds before I do it. So when my eyes glance to the left, it could warn me there’s a car between me and the exit I want to take.

A future version of the software will know even more about me; the grad students developing what they’ve dubbed Brains4Cars plan to let drivers connect their fitness trackers to the car. If your health tracker “knows” you haven’t gotten enough sleep, the car will be more alert to your nodding off.

“Our entire focus is to put a lot of sensors inside the car to monitor the driver, and build predictive systems,” Ashesh Jain, one of the project leads, told me. The car should “know in advance what you’re going to do.”

The team has gathered almost 1,200 miles worth of driving data on freeways and residential areas in Nevada and California, along with inside-the-car video from 10 drivers — 11, now, if you include me. They’re analyzing that video with facial recognition and eye tracking software to figure out what signals in your face correlate to certain driving maneuvers, which they can track via other sensors in the car. If you’re looking at your left rearview mirror, for instance, chances are you’re going to switch into the left lane. In a paper published this week describing the first stage of the project, the researchers report that they’ve built an algorithm capable of predicting driving maneuvers 3.5 seconds before they occur, with about 80 percent accuracy. It’s more accurate and can predict farther into the future than other experimental systems to date.
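As a deliberately simplified illustration of the anticipation idea - the feature, threshold, and labels below are hypothetical, not the Brains4Cars model, which learns temporal patterns over face, GPS, and speed data - a glance-counting rule over a short gaze window might look like this:

```python
def predict_maneuver(gaze_window, min_glances=3):
    """Toy anticipation rule: if the driver glanced at a side mirror at
    least `min_glances` times in the recent window of gaze samples,
    predict an imminent lane change toward that side; otherwise predict
    staying in lane. Illustrative only - the real system fuses several
    sensors in a learned model to reach ~80% accuracy 3.5 s ahead."""
    if gaze_window.count("left_mirror") >= min_glances:
        return "left_lane_change"
    if gaze_window.count("right_mirror") >= min_glances:
        return "right_lane_change"
    return "keep_lane"

# Ten gaze samples (~0.35 s apart would span the 3.5 s horizon).
window = ["road", "left_mirror", "road", "left_mirror", "left_mirror",
          "road", "road", "road", "road", "road"]
prediction = predict_maneuver(window)
```

The point of the sketch is the structure of the problem: a stream of interior observations is summarized over a window and mapped to a maneuver label before the maneuver begins.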


Here is something similar coming from the EU.
Europe's research commissioner lays out his ambitions
Carlos Moedas on European funding models, diplomacy and scientific advice.
Last November, Portuguese engineer-turned-economist Carlos Moedas was plucked from managing his country’s budget-cutting austerity programme to take charge of the research portfolio at the European Commission in Brussels.

Five months into his five-year term as research commissioner, Moedas spoke to Nature about his hopes and ambitions for the scientific programmes run by the European Union (EU), particularly the huge seven-year €80-billion (US$86-billion) Horizon 2020 (H2020) research programme, which runs until 2020. Moedas wants scientists to change their mentality for H2020, breaking free of individual silos and including more social science. But he is already facing complaints that money is being stripped from the programme to finance other European initiatives, such as the proposed €16-billion European Fund for Strategic Investment (EFSI), a Europe-wide bid to stimulate the region’s economy. The following is an edited version of the interview.


Talking about the EU - here is a one hour video from the Institute for New Economic Thinking, also representing the OECD. Yanis Varoufakis, the current finance minister of Greece, was also the economist in residence at one of the world’s biggest gaming companies - Valve - of ‘Handbook for New Employees’ fame (Google it to see an amazing company culture). One of the realizations Varoufakis had while at Valve was that the virtual-world economy of an MMOG suggested the very real possibility of an emerging post-labor-force economy. This discussion is far from his experiences at Valve; however, the insights gleaned there can be seen in his willingness to ‘think outside the box’.
Yanis Varoufakis and Joseph Stiglitz


Speaking of national responses to the digital environment.
Why governments must embrace the new global digital reality
On Jan. 28, 2011, in the middle of a popular uprising, the president of Egypt turned off the Internet. This striking display of state power is well known. Less well known is how the Internet was turned back on.

Around the world, hackers and activists who belong to a collective known as Telecomix began to re-establish network connections in Egypt. They arranged with a hacker-friendly French Internet service to provide hundreds of dial-up modem lines, sought out amateur-radio enthusiasts to broadcast short logistical messages, faxed leaflets to university campuses and cyber cafés explaining how to get around the blackouts, and used the same tactic to get news out of Egypt.

Telecomix is one of a new breed of actor taking part in international conflict. When the Arab Spring moved to Syria, this new breed included hackers from Anonymous who took down government infrastructure, crisis mappers who crowd-sourced the analysis of tank locations, citizens who streamed the bombardment of cities to YouTube, and networks of amateur experts who used these videos to trace the origins of munitions.

These groups do not fit comfortably in traditional categories: They are not nation-states, formal institutions or rogue individuals. Instead, they share characteristics and capabilities that are fundamentally technology-enabled.

They are formless. You can’t join them, because they are not organizations; you can’t lead them, because there is no leader; and most engage while cloaked in encryption and pseudonyms. All this stands in direct contrast with the hierarchical structures that give traditional institutions strength.


And if we think it’s only governments that have to transform - here’s an interesting HBR article by Gary Hamel on one of the largest and longest-lived organizations.
The 15 Diseases of Leadership, According to Pope Francis
Pope Francis has made no secret of his intention to radically reform the administrative structures of the Catholic church, which he regards as insular, imperious, and bureaucratic. He understands that in a hyper-kinetic world, inward-looking and self-obsessed leaders are a liability.

Last year, just before Christmas, the Pope addressed the leaders of the Roman Curia — the Cardinals and other officials who are charged with running the church’s byzantine network of administrative bodies. The Pope’s message to his colleagues was blunt. Leaders are susceptible to an array of debilitating maladies, including arrogance, intolerance, myopia, and pettiness. When those diseases go untreated, the organization itself is enfeebled. To have a healthy church, we need healthy leaders.

Through the years, I’ve heard dozens of management experts enumerate the qualities of great leaders. Seldom, though, do they speak plainly about the “diseases” of leadership. The Pope is more forthright. He understands that as human beings we have certain proclivities — not all of them noble. Nevertheless, leaders should be held to a high standard, since their scope of influence makes their ailments particularly infectious.

The Catholic Church is a bureaucracy: a hierarchy populated by good-hearted, but less-than-perfect souls. In that sense, it’s not much different than your organization. That’s why the Pope’s counsel is relevant to leaders everywhere.

[These] ...diseases and temptations, which can dangerously weaken the effectiveness of any organization, are:
The disease of thinking we are immortal, immune, or downright indispensable
The disease of excessive busyness
Then there is the disease of mental and [emotional] “petrification.”
The disease of excessive planning and of functionalism
The disease of poor coordination
There is also a sort of “leadership Alzheimer’s disease.”
The disease of rivalry and vainglory
The disease of existential schizophrenia
The disease of gossiping, grumbling, and back-biting
The disease of idolizing superiors
The disease of indifference to others
The disease of a downcast face
The disease of hoarding
The disease of closed circles
Lastly: the disease of extravagance and self-exhibition


Here is a 23 min video from John Hagel talking, among other things, about the need to redesign our organizations to scale learning as the primary principle - rather than scaling efficiency - in a rapidly changing world.
John Hagel Deep Dive Dinner Feb 2014
John Hagel, co-author of The Power of Pull and co-chair of Deloitte's Center for the Edge, shares his latest insights from the edges, and discusses the power of narrative and the passion of the explorer.


Here is a 43 min video by one of the co-founders of the startup accelerator-incubator Y Combinator. It is one of the most coveted accelerators thanks to its network and the success of its alumni. As an accelerator it has funded 842 companies to date - including Dropbox, Airbnb, Reddit, Stripe, Twitch, Homejoy and more - and the market capitalization of Y Combinator startups exceeds $30 billion. Four Y Combinator companies are currently worth more than $1 billion, and 32 companies are valued at more than $100 million.
Before the Startup (Paul Graham)
Paul Graham delivers an informative (and highly amusing) talk, addressing counterintuitive parts of startups - How to Start a Startup.
This is Lecture 1 in the same series - 44 min video worth the view
Lecture 1 - How to Start a Startup (Sam Altman, Dustin Moskovitz)
Sam Altman, President of Y Combinator, and Dustin Moskovitz, Cofounder of Facebook, Asana, and Good Ventures, kick off the How to Start a Startup Course. Sam covers the first 2 of the 4 Key Areas: Ideas, Products, Teams and Execution; and Dustin discusses Why to Start a Startup.


Speaking about startups, here’s a great 39 min Google Talk video.
Reputation Economics: Why Who You Know is Worth More Than What You Have
Joshua Klein details the ins and outs of reputation economics and how it incorporates all of the intangibles that make us human beings, such as the things we value, what kind of customer service we prefer, and what types of goods and services speak to us as individuals. Readers will be shocked to learn why exchanges in the realm of reputation economics don't require financial systems to operate, and that a person's reputation can be used as a powerful and effective tool to ensure a consumer's welfare, without influence from massive institutions.

For the first time, the internet has removed all the limiting factors that have kept these methods of interaction underground, and Klein describes how they are being used by major corporations as marketing tools, and by the general public to bypass traditional sales channels. He details the ultra-current data analyses industry, including what types of information corporations gather about their customers, why they so often use it incorrectly, and how savvy consumers and small retailers are creating a symbiosis that works best for all parties.


Here’s a recent article on the imminent emergence of 3D manufacturing into the mainstream - an excellent summary of the current state of the field for anyone interested in the transformation and disruption of how we make many things.
The 3-D Printing Revolution
Industrial 3-D printing is at a tipping point, about to go mainstream in a big way. Most executives and many engineers don’t realize it, but this technology has moved well beyond prototyping, rapid tooling, trinkets, and toys. “Additive manufacturing” is creating durable and safe products for sale to real customers in moderate to large quantities.

The beginnings of the revolution show up in a 2014 PwC survey of more than 100 manufacturing companies. At the time of the survey, 11% had already switched to volume production of 3-D-printed parts or products. According to Gartner analysts, a technology is “mainstream” when it reaches an adoption level of 20%.

Among the numerous companies using 3-D printing to ramp up production are GE (jet engines, medical devices, and home appliance parts), Lockheed Martin and Boeing (aerospace and defense), Aurora Flight Sciences (unmanned aerial vehicles), Invisalign (dental devices), Google (consumer electronics), and the Dutch company LUXeXcel (lenses for light-emitting diodes, or LEDs). Watching these developments, McKinsey recently reported that 3-D printing is “ready to emerge from its niche status and become a viable alternative to conventional manufacturing processes in an increasing number of applications.” In 2014 sales of industrial-grade 3-D printers in the United States were already one-third the volume of industrial automation and robotic sales. Some projections have that figure rising to 42% by 2020.

In addition to basic plastics and photosensitive resins, printable materials already include ceramics, cement, glass, numerous metals and metal alloys, and new thermoplastic composites infused with carbon nanotubes and fibers. Superior economics will eventually convince the laggards. Although the direct costs of producing goods with these new methods and materials are often higher, the greater flexibility afforded by additive manufacturing means that total costs can be substantially lower.


Speaking of revolutions - here’s a short article describing the emerging revolution - Industry 4.0
From Germany to the World: Industry 4.0
IIoT by another name?  That’s the essence of Industry 4.0. It may be a measure of the importance and potential of the new wave of technology revolutionizing manufacturing that it has spawned so many initiatives. From Germany, a nation with a nearly unmatched reputation in manufacturing, comes Industry 4.0.

With a name first proposed only in 2011 at the Hanover Fair, plans and a report soon followed in 2013, leading eventually to formal adoption of the term by the German government. The concept behind the name divides the history of industry into four phases. Mechanization was the first; mass production the second; the third was digitization – the initial impact of computing. The fourth (and thus the name, Industry 4.0) involves bringing intelligence, connectivity and much broader computerization to manufacturing.

According to the April 2013 Recommendations for implementing the strategic initiative INDUSTRIE 4.0 prepared for the German Federal Ministry of Education and Research:

In the future, businesses will establish global networks that incorporate their machinery, warehousing systems and production facilities in the shape of Cyber-Physical Systems (CPS). In the manufacturing environment, these Cyber-Physical Systems comprise smart machines, storage systems and production facilities capable of autonomously exchanging information, triggering actions and controlling each other independently. This facilitates fundamental improvements to the industrial processes involved in manufacturing, engineering, material usage and supply chain and life cycle management….

And a related article from the World Economic Forum
Why the factory of the future is online
“Distributed Manufacturing” is one of 10 emerging technologies of 2015 highlighted by the World Economic Forum’s Meta-Council on Emerging Technologies.

With the rise of 3D printing has come the rise of a wider trend: distributed manufacturing, in which products are fabricated by the customer themselves or in facilities that are much smaller and more local than traditional factories. We spoke to Jeff Carbeck of Deloitte Consulting about the growth of distributed manufacturing and what it means.


Here’s some more interesting analysis on the imminent energy shift.
Why Energy Storage is About to Get Big – and Cheap
Storage of electricity in large quantities is reaching an inflection point, poised to give a big boost to renewables, to disrupt business models across the electrical industry, and to tap into a market that will eventually top tens of billions of dollars per year, and trillions of dollars cumulatively over the coming decades.

Energy storage is hitting an inflection point sooner than I expected, going from being a novelty, to being suddenly economically extremely sensible. That, in turn, is kicking off a virtuous cycle of new markets opening, new scale, further declining costs, and additional markets opening.

To elaborate: Three things are happening which feed off of each other.
The Price of Energy Storage Technology is Plummeting. Indeed, while high compared to grid electricity, the price of energy storage has been plummeting for twenty years. And it looks likely to continue.

Cheaper Storage is on the Verge of Massively Expanding the Market.  Battery storage and next-generation compressed air are right on the edge of the prices at which it becomes profitable to arbitrage shifting electricity prices – filling up batteries with cheap power (from night-time sources, abundant wind or solar, or other) and using that stored energy rather than peak-priced electricity from natural gas peakers. This arbitrage can happen at either the grid edge (the home or business) or as part of the grid itself. Either way, it taps into a market of potentially hundreds of thousands of MWh in the US alone.

A Larger Market Drives Down the Cost of Energy Storage. Batteries and other storage technologies have learning curves. Increased production leads to lower prices. Expanding the scale of the storage industry pushes forward on these curves, dropping the price. Which in turn taps into yet larger markets….
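The arbitrage logic in the points above can be sketched with a toy calculation. The prices and round-trip efficiency below are illustrative assumptions, not figures from the article: arbitrage is profitable only when the peak price exceeds the off-peak price divided by the storage system's round-trip efficiency (plus amortized hardware cost per cycle, omitted here for simplicity).

```python
def arbitrage_margin_per_kwh(off_peak_price, peak_price,
                             round_trip_efficiency=0.85):
    """Gross margin per kWh discharged: buy cheap power, lose some of it
    to charge/discharge losses, then displace peak-priced electricity.
    Prices in $/kWh; efficiency is an assumed round-trip figure - for
    each kWh delivered you must buy 1/efficiency kWh off-peak."""
    cost_per_kwh_delivered = off_peak_price / round_trip_efficiency
    return peak_price - cost_per_kwh_delivered

# Illustrative (not sourced) prices: $0.06/kWh off-peak, $0.18/kWh at peak.
margin = arbitrage_margin_per_kwh(0.06, 0.18)
```

As storage prices fall, the amortized hardware cost per cycle drops below this gross margin for more and more price spreads, which is the virtuous cycle the author describes.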


Here’s something not yet ready but on its way - a boon for many of us.
Genetically Modified Bacteria Can Stop Us From Overeating, And Thus Prevent Obesity
Anti-obesity measures are nothing new in the realm of health, but most of them tend to fall flat because they don’t work with the natural microbiome of our bodies - the internal ecosystem consisting of a host of pathogenic and symbiotic microorganisms (including bacteria). This time around, scientists may have found a solution: a genetically engineered variety of fat-fighting bacteria that can take up residence inside our bodies. To that end, a group of researchers at Vanderbilt University modified a strain of E. coli and tested it on mice. The results were encouraging, with the bacteria having a favorable effect for around six weeks inside the animals’ guts.

In terms of how it works, the engineered bacteria were given a gene for N-acyl-phosphatidylethanolamines. These molecules help generate an appetite-suppressing compound that produces the effect of ‘feeling full’ after eating. Interestingly, this compound is produced naturally by our intestines, but in some people it is secreted only in scarce quantities. This creates a craving for more food even after the biological need for energy has actually been met. In other words, these people tend to ‘overeat’ after being ‘full’ – thus leading to overweight or obese conditions.

As for the tests, the bacteria were delivered to the mice simply by adding them to drinking water. The results showed that mice treated with the bacteria weighed a substantial 15 percent less than untreated mice – with both groups being fed a high-fat diet….


Speaking of the relationship between our microbial profile and our health - this is a fascinating link between autoimmune disturbances and mental health.
Yes, You Can Catch Insanity
A controversial disease revives the debate about the immune system and mental illness.
One day in March 2010, Isak McCune started clearing his throat with a forceful, violent sound. The New Hampshire toddler was 3, with a Beatles mop of blonde hair and a cuddly, loving personality. His parents had no idea where the guttural tic came from. They figured it was springtime allergies.

Soon after, Isak began to scream as if in pain and grunt at his parents and peers. When he wasn’t throwing hours-long tantrums, he stared vacantly into space. By the time he was 5, he was plagued by insistent, terrifying thoughts of death. “He would smash his head into windows and glass whenever the word ‘dead’ came into his head. He was trying to drown out the thoughts,” says his mother, Robin McCune, a baker in Goffstown, a small town outside Manchester, New Hampshire’s largest city.

Isak’s parents took him to pediatricians, therapy appointments, and psychiatrists. He was diagnosed with a host of disorders: sensory processing disorder, oppositional defiance disorder, and obsessive-compulsive disorder (OCD). At 5, he spent a year on Prozac, “and seemed to get worse on it,” says Robin McCune.

As his behaviors worsened, both parents prepared themselves for the possibility that he’d have to be home-schooled or even institutionalized. Searching for some explanation, they came across a controversial diagnosis called pediatric autoimmune neuropsychiatric disorders associated with streptococci, or PANDAS. First proposed in 1998, PANDAS linked the sudden onset of psychiatric symptoms like Isak’s to strep infections.

….as Isak’s illness dragged into its fourth year, they reconsidered the possibility. The year before the epic meltdowns began, his older brother had four strep infections; perhaps it was more than coincidence. In September 2013, three and a half years after his first tics appeared, a pediatric infectious-disease specialist in Boston put Isak on azithromycin, a common antibiotic used to treat food poisoning, severe ear infections, and particularly persistent cases of strep throat.

The results were dramatic. Isak’s crippling fear vanished within days. Then he stopped grunting. Less than a week after starting his son on the antibiotic, Adam McCune saw Isak smile for the first time in nearly four years. After a few weeks, the tantrums that had held the family hostage for years faded away.


Speaking of new lifeforms - here’s an interesting discovery.
Hiding in Plain Sight
Researchers using metagenomics and single-cell sequencing identify a potential new bacterial phylum.
Studies on 16S ribosomal RNA (rRNA) sequences have opened scientists’ eyes to the complexity of microbial communities, but some bacteria evade detection. At the US Department of Energy (DOE) Joint Genome Institute User Meeting held in Walnut Creek, California, last week, researchers announced the genomic identification of a potential new bacterial phylum, Candidatus Kryptonia, based on their study of samples isolated from four hot springs located in North America and Asia. Altogether, the DOE team sequenced 22 Kryptonia genomes.

“It’s always difficult to claim absolutely a new lineage until you’ve done some biochemical tests,” said microbial ecologist Jack Gilbert of Argonne National Laboratory and the University of Chicago, who was not involved with the study, “but, genomics-wise, this thing appears to fit outside of our current understanding.”

Genomic analyses place Kryptonia in the Bacteroidetes superphylum, whose members thrive in the gut and in marine environments. If confirmed, Kryptonia would be the first extreme thermophile found in this group. Kryptonia appears to have acquired this characteristic through horizontal gene transfer from Archaea.


More advances in the world of biosensors and the IoT.
Inkjet printer could produce simple tool to identify infectious disease, food contaminants
Consumers are one step closer to benefiting from packaging that could give simple text warnings when food is contaminated with deadly pathogens like E. coli and Salmonella, and patients could soon receive real-time diagnoses of infections such as C. difficile right in their doctors’ offices, saving critical time and trips to the lab.

Researchers at McMaster University have developed a new way to print paper biosensors, simplifying the diagnosis of many bacterial and respiratory infections.
The new platform is the latest in a progression of paper-based screening technologies, which now enable users to generate a clear, simple answer in the form of letters and symbols that appear on the test paper to indicate the presence of infection or contamination in people, food or the environment.

“The simplicity of use makes the system easy and cheap to implement in the field or in the doctor’s office,” says John Brennan, director of McMaster’s Biointerfaces Institute, where the work was done with biochemist Yingfu Li and graduate student Carmen Carrasquilla.


Here is a very interesting development with a lot of potential for future political action once VR, AR and Mixed Reality are more widespread. There are three short videos and some pictures as well.
First Hologram Protest in History Held Against Spain’s Gag Law
Spanish citizens staged the first hologram protest in history, demonstrating without violating the draconian new rules of the National Security Act, the new amendments to the Penal Code, and the anti-terror law.

According to the recently approved “triad gag“, the citizens of Spain cannot protest against the Congress or hold meetings in public spaces, plus they have to ask permission from the authorities whenever they wish to protest publicly.

“If you are a person you can not express yourself freely, you can only do that here if you become a hologram,” says a woman in the video released by the movement “Hologramas para la Libertad.”


In relation to the virtual exchange - here’s some competition to the rapidly emerging world of new trans-sovereign currencies.
A New Competitor for Bitcoin Aims to Be Faster and Safer
A Stanford professor claims to have invented a Bitcoin-like system that can handle payments faster and with more security.
The total value of the digital currency Bitcoin is now approximately $3.4 billion, and many companies and investors are working to prove that the technology can make financial services cheaper and more useful.

But Stanford professor David Mazières thinks he has a faster, more flexible, and more secure alternative. If Mazières is correct, his technology could make digital payments and other transactions cheaper, safer, and easier—particularly across borders. He released the design for his system in a white paper last Wednesday.

Bitcoin transactions rely on software run on thousands of computers linked up over the Internet. That distributed network uses a set of rules and cryptographic principles to reliably verify transactions even though no one person or organization is in control.

The system was introduced to the world in 2008 in a technical paper released under the pseudonym Satoshi Nakamoto. Its design was significant for showing a way for a pool of contributors that don’t necessarily trust each other to collectively create a system to verify transactions. But the way Bitcoin achieves this makes it slower and less secure than is ideal for a system meant to become part of the world’s financial infrastructure, Mazières says.
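The mechanism Nakamoto described for letting mutually distrustful parties agree on a transaction history is proof-of-work. A toy sketch can illustrate the core asymmetry (the difficulty here is trivially low, and real Bitcoin nodes hash full block headers and adjust difficulty dynamically — this is an illustration, not the actual protocol code):

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Find a nonce whose SHA-256 digest of (block_data + nonce)
    starts with `difficulty` zero hex digits -- a toy version of
    the puzzle miners race to solve."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Finding the nonce takes many hash attempts, but any node on the
# network can verify the result with a single hash.
nonce, digest = proof_of_work("Alice pays Bob 1 BTC")
assert digest.startswith("0000")
```

This asymmetry — expensive to produce, cheap to check — is what lets thousands of untrusting computers agree on a shared ledger; it is also why the system is slow, which is the weakness Mazières’s alternative aims to address.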


Here is something really cool. An interactive model to play with.
Welcome to the Global Calculator
The Global Calculator is a model of the world's energy, land and food systems to 2050. It allows you to explore the world's options for tackling climate change and see how they all add up. With the Calculator, you can find out whether everyone can have a good lifestyle while also tackling climate change.


And here is a possibility of a new human sense - I would love to have this now.
Researchers Use Electrodes for “Human Cruise Control”
A study put people on autopilot by electrically stimulating their thigh muscles.
Sure, you can get directions by looking at a map on your phone or listening to turn-by-turn navigation. But what if you could just walk from point A to point B in a new place without having to look at a device or even think about whether you’re on the right course?

A group of researchers from three German universities is working on just that. In a study, they electrically stimulated a leg muscle to nudge subjects to turn left or right along twisty routes in a park. The work, which researchers refer to as “human cruise control,” will be presented next week in a paper at the CHI 2015 human-computer interaction conference in Seoul, South Korea.

Max Pfeiffer, a coauthor of the paper and graduate student at the University of Hannover, says the idea is to eliminate the distraction of having to constantly pay attention to your phone while finding your way. If the researchers can figure out how to make the technology reliable enough and get people comfortable using it, it could also be helpful for exercise workouts or guiding emergency responders in situations where they can’t see well.

The researchers placed electrodes on participants’ sartorius muscles, which run diagonally across the thighs. These were connected to a commercially available electrical muscle stimulation device and a Bluetooth-equipped control board, both worn at the waist.
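The control logic for such a system can be sketched as a simple steering decision: compare the walker’s heading to the bearing of the next waypoint and stimulate the left or right leg accordingly. All function names, thresholds, and the dead-zone idea below are illustrative assumptions, not details from the CHI paper:

```python
def steering_command(heading_deg: float, bearing_deg: float,
                     dead_zone_deg: float = 10.0) -> str:
    """Decide which leg to stimulate given the walker's current
    compass heading and the bearing to the next waypoint.
    Hypothetical sketch; thresholds are made up for illustration."""
    # Signed angular error, normalized into (-180, 180]
    error = (bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(error) <= dead_zone_deg:
        return "none"   # close enough to on-course: no stimulation
    return "right" if error > 0 else "left"

# Walker faces north (0 deg), waypoint lies due east (90 deg):
assert steering_command(0.0, 90.0) == "right"
```

In a real device this decision would run on the Bluetooth control board, which would then pulse the stimulation hardware on the chosen side.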


For Fun
This is hilarious in a number of ways beyond the obvious one of trying to ban humour on the Internet.
The Kremlin Declares War on Memes
Russian censors have determined that one of the most popular forms of Internet meme is illegal. According to Roskomnadzor, the Kremlin’s media watchdog, it’s now against the law to use celebrities’ photographs in a meme, “when the image has nothing to do with the celebrity’s personality.”

The new policy comes on the heels of a court decision in Moscow, where a judge ruled that a particular photo meme violates the privacy of Russian singer Valeri Syutkin. The court’s decision targets an article on Lurkmore, a popular Wikipedia-style Russian website that focuses on Internet subcultures and memes.