Thursday, February 12, 2015

Friday Thinking, 13 February 2015

Hello all – Friday Thinking is curated in the spirit of sharing. Many thanks to those who enjoy this. :)

A company manager is flying across the desert in a hot air balloon when he realizes he is lost. He calls down to a man riding a camel below him and asks where he is.
The man replies “You’re 42 degrees, 12 minutes, 21.2 seconds north, 122 degrees, 10 minutes west, 212 metres above sea level, heading due east by north east.”
“Thanks,” replies the balloonist. “By the way, are you a data analyst?”
“Yes,” replies the man, “how did you know?”
“Everything you told me was totally accurate, you gave me way more information than I needed and I still have no idea what I need to do.”
“I’m sorry,” replied the camel-riding analyst. “By the way, are you a company manager?”
“Yes,” said the balloonist, “how did you know?”
“Well,” replied the analyst, “You’ve got no idea where you are, no idea what direction you’re heading in, you got yourself into this fix by blowing a load of hot air, and now you expect me to get you out of it.”
The Best Big Data Jokes Ever!


In 1905, a young Albert Einstein shocked the world.  In one miracle year, he overturned the prevailing assumptions of his day and changed how we see the universe, transforming forever how we think of time, space, mass, energy and light.  He paved the way for our modern world.

Yet 22 years later it was Einstein’s turn to be caught in the mire of his own assumptions.  In a famous round of debates with Niels Bohr, he was unable to accept the consequences of the quantum world that, in fact, he had made possible in 1905, insisting that “God does not play dice with the universe.”

Einstein’s problem wasn’t grasping the importance of a new idea, but accepting an entirely new platform for physics—one which led to things like lasers, microprocessors and iPhones—and it doomed the rest of his career.

Today, we all face a similar dilemma.  New platforms require us not to merely alter our behavior, but our assumptions about how the world works.

...Some years ago, I was talking to a friend about how she ran her business.  “I don’t allow Facebook or Skype in the office,” she said, “it just allows people to fool around when they should be working.”  She saw social media as a threat to employee productivity and was determined to squash it.

When I pointed out that she was on Skype all the time and that, in fact, it was our primary mode of communication, she replied, “That’s different.  I need to talk to a lot of people in different countries and Skype is far more efficient.”  (These days, she seems to like Facebook messenger too).

Today, social media is my primary news source for many areas I’m interested in.  That’s not a function of the platforms themselves, but rather of who I’m connected to—experts in particular fields who have ready access to information that would be hard to find anywhere else.
The New Age Of Platforms


The real invention is 4 billion years old, that's the evolutionary age of the ribosome. To understand the ribosome, think about a child playing with Lego bricks and compare it to a state-of-the-art 3D printer. The child and the ribosome do much the same thing.   
                      
When the child assembles Lego bricks, the first attribute is metrology that comes from the parts. When you snap the bricks together, you don't need a ruler to play Lego; the geometry comes from the parts. What it means is that a child can make a Lego structure bigger than themselves. The same goes for the ribosome—the Lego bricks are amino acids, and the ribosome assembles amino acids to elongate a protein. You can make an elephant one amino acid at a time because the geometry comes from the parts. In a 3D printer today, what you can make is limited by the size of the machine. The geometry is external.

The second difference—now we come to the Shannon part—is the Lego tower is more accurate than the child because the constraint of assembling the bricks lets you detect and correct errors. The tower is more accurate than the motor control of the child….
  
...There are twenty amino acids. With those twenty amino acids you make the motors in the molecular muscles in my arm, you make the light sensors in my eye, you make my neural synapses. The way that works is the twenty amino acids don't encode light sensors, or motors. They’re very basic properties like hydrophobic or hydrophilic. With those twenty properties you can make you. In the same sense, digitizing fabrication in the deep sense means that with about twenty building blocks—conducting, insulating, semiconducting, magnetic, dielectric—you can assemble them to create modern technology.      
Neil Gershenfeld - Digital Reality
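Gershenfeld's error-correction point (discrete, snap-together parts let an imprecise assembler build large, precise structures) can be sketched as a toy simulation. All numbers here are illustrative, not from the talk:

```python
import random

random.seed(0)

PITCH = 8.0   # brick pitch: correct joints sit at multiples of this (arbitrary units)
NOISE = 0.5   # std-dev of the "child's" placement error at each step
STEPS = 1000  # bricks stacked end to end

def build(snap):
    """Stack STEPS noisy bricks; return the final position error."""
    pos = 0.0
    for _ in range(STEPS):
        pos += PITCH + random.gauss(0.0, NOISE)  # imprecise placement
        if snap:
            # Interlocking geometry pulls the joint to the nearest
            # lattice site, discarding that step's error entirely.
            pos = PITCH * round(pos / PITCH)
    return abs(pos - PITCH * STEPS)

print("analog assembly error: ", round(build(snap=False), 2))
print("digital assembly error:", round(build(snap=True), 2))
```

Without snapping, the error accumulates as a random walk, growing roughly with the square root of the number of steps; with snapping, each placement is corrected before the next one, so the tower ends up more accurate than the hand that built it.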


This is a fascinating window into the past: 2004, the start of a four-year experiment in radically transparent sharing by Microsoft workers about working at Microsoft, sometimes very frank sharing. Imagine government workers with such freedom???
The 9 Guys - Who We Are
Welcome to Channel 9.  We are five guys at Microsoft who want a new level of communication between Microsoft and developers. We believe that we will all benefit from a little dialogue these days. This is our first attempt to move beyond the newsgroup, the blog, and the press release to talk with each other, human to human.

Here is the first video (a MUST VIEW) posted after this inaugural launch of the public opening of Microsoft’s internal environment. When the rest of the Niners saw this recording they were all in shock: “This isn’t even boring! Oh my gosh, it looks like Scoble was right; work-related interviews with Microsoft staff may actually be worth putting online.” Could the public service try to do this, as a way of sharing learning-in-the-doing, capturing knowledge via exit interviews and so much more??? Something to think about.
Bill Hill: Homo sapiens 1.0 - The world's most important operating system
Posted: Mar 26, 2004
The most important operating system developers write software for is not Windows or OSX or Linux or Android. It's Homo sapiens 1.0. We make software for people first. Very wise words from a very wise soul, Bill Hill.

Bill passed away on October 16, 2012 from a sudden heart attack. We are all shocked and heartbroken. He was a very special member of the Channel 9 family and his contributions to C9, Microsoft and the industry are legendary. His devotion to learning - and to making knowledge readily available to everybody - are a testament to who he was as a person and scholar. Bill was deeply human and unusually brilliant; an iconoclastic mind with a heart of gold.

This short video clip is from 2004. It is rife with Bill's wisdom and captures the essence of his passion for reading, writing, learning and knowledge.


If we take the idea of Channel 9 much further, we could imagine a new form of knowledge graph for any organization. This is a short Harvard Business Review article.
The Rise of Social Graphs for Businesses
If you’re a savvy social media user then you’ve already figured out that the knowledge a tool like Facebook is able to gather about your social connections is not only valuable to you. For you, Facebook’s ability to depict your network of friends and the varying strengths of those relationships supports all your mutual information sharing. For others — third parties — this “social graph” makes it possible to make personalized recommendations to you, and everyone else. For example, TripAdvisor leverages Facebook’s social graph to ensure that, when you are looking for reviews of hotels, restaurants, and so forth, any reviews posted by people you know appear right at the top.

For the social network companies, it didn’t take long to realize that the latter form of value creation should be the real focus of their business models. Early ventures like MySpace primarily focused on the social activity among their account holders, working to provide better tools to help them manage their relationships. Today’s social networks see social tools not as their end product but as a means for acquiring data. Facebook, in particular, saw the big opportunity in the “information exhaust” produced by all that user activity to produce a higher-level intelligence layer that would be useful to other businesses. Having graphed its users’ relationships and interactions, it could offer anyone else interested in those users the insight to reach them with highly targeted services.

Let’s say, however, that you are a business that would like to see that kind of social graph of the interactions among enterprises and not just individuals – perhaps because you sell to business customers, or perhaps because of your need to deal with suppliers. All businesses operate within their own networks of vendors, partners, clients, competitors, and other entities — the favored term these days is to talk of their “ecosystems.” Wouldn’t that be a valuable space to map?

This is the next step in the evolution of the social graph — let’s call it the emergence of the “commercial graph” — and it is happening now. Commercial graphs depict relationships between businesses, based on their actual interactions as they are captured digitally. And they support highly relevant information sharing, analogous to the TripAdvisor example above. Commercial graphs will help businesses manage their own partner relationships better, and also help third parties to those ecosystems understand them and spot ways to make targeted offers to those within them.
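As a toy illustration of the mechanics (names, tie strengths, and reviews are all made up), here is how a social graph can re-rank content the way the TripAdvisor example describes, surfacing reviews by your connections first:

```python
# A tiny social graph: an adjacency map with tie strengths (hypothetical data).
graph = {
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"alice": 0.9, "dave": 0.7},
    "carol": {"alice": 0.4},
    "dave":  {"bob": 0.7},
}

reviews = [            # (author, hotel, stars)
    ("dave",  "Hotel Zeta", 3),
    ("carol", "Hotel Zeta", 5),
    ("erin",  "Hotel Zeta", 4),   # erin is a stranger to alice
]

def rank_reviews(user, reviews, graph):
    """Order reviews so the user's connections come first, strongest tie first."""
    ties = graph.get(user, {})
    # Strangers get tie strength 0.0 and sink to the bottom.
    return sorted(reviews, key=lambda r: ties.get(r[0], 0.0), reverse=True)

for author, hotel, stars in rank_reviews("alice", reviews, graph):
    print(f"{author}: {hotel}, {stars} stars")   # carol (a direct tie) prints first
```

A commercial graph would be the same structure with companies as nodes and digitally captured interactions (orders, shipments, payments) as the weighted edges.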


Speaking of a knowledge graph - here’s a wonderful open letter to the UK about legislating ‘backdoors’ on all software.
An Open Letter to Prime Minister Cameron
20th-century solutions won’t help 21st-century surveillance
Why a seemingly sensible proposal to compel back doors in Internet communications apps is a bad idea.
Heads of government bear the burden of keeping their populaces safe. That’s a crushing responsibility. Police solve violent crimes — and intelligence agencies predict and avert them — largely by intercepting the conversations of people conspiring to get away with them.

For at least thirty years democracies have kept eavesdropping within bounds by requiring a warrant or some other form of meaningful review before setting up something like a wiretap. As telephone companies upgraded to digital (but still not Internet-based) networks in the 1990s, governments around the world began to require that the new networks still allow for authorities to listen in to calls. The rationale was simple and generally uncontroversial: so long as the government respected the rule of law, its demands for information shouldn’t be trumped by new technological facts on the ground.

Why, then, you reasonably ask, should that long-established balance between security and privacy be disturbed simply because the internet has replaced telephony? The answer, it turns out, is that baking government access into all Internet apps will in fact not extend the long-established balance between security and privacy to all mediums of communication. It will upend it.
Here are four reasons why:


Here’s something that is fantastic: an incredibly influential presentation, even though it never directly steered the course of events at the time. This “Mother of All Demos”, recorded in 1968!!!, makes very clear that the whole GUI-and-mouse thing did not originate at Xerox PARC (which subsequently passed it on to Steve Jobs & Bill Gates); Xerox got it all from this presentation. An utterly mind-blowing, ahead-of-its-time video, ‘the moonshot of computing’, and it all became incorporated into what we have today. Just over 1 ½ hrs.
The Mother of All Demos, presented by Douglas Engelbart (1968)
"The Mother of All Demos is a name given retrospectively to Douglas Engelbart's December 9, 1968, demonstration of experimental computer technologies that are now commonplace. The live demonstration featured the introduction of the computer mouse, video conferencing, teleconferencing, hypertext, word processing, hypermedia, object addressing and dynamic file linking, bootstrapping, and a collaborative real-time editor."


So after the Mother of All Demos, which introduced us to the GUI and the mouse almost 50 years ago, we are on the verge of finally replacing the content of this interface, ‘the printing press’, and providing the immersive and interactive medium of the digital environment itself. There’s a live demo video, about 6 minutes, that is a must-see.
Now if we think the MOOC is disrupting how we conceive of delivering education, what will virtual & augmented realities bring to the ‘mix’?
I Just Tried Microsoft's Remarkable Holographic Headset — Here's What It's Like
I just had a 40-minute in-person demonstration of HoloLens, Microsoft's new computer headset, and I'm convinced that personal computing is on the verge of a major change.

In 10 years or so, people will be using head-mounted displays that project 3D images that you can interact with in actual space.

It's going to be a huge leap over the flat-screen computing that we've all become used to over the past 30 years. It's so much obviously better that once people try it, there will be no going back.

This was the second time in two months that I felt as if I were glancing into the future. The first was when I tried on the latest version of the Oculus Rift, Facebook's virtual-reality headset. It reminded me of that "wow" feeling I had the first time I tried an iPhone back in 2007.

HoloLens and Oculus are similar but distinct. Oculus Rift is virtual reality, which means the image seems to surround you entirely, and you don't see any part of the real world.

HoloLens is augmented reality, which means it projects images on top of the real world.
(It doesn't really project holograms everybody can see — to see the images, you need to be wearing the headset or looking at a computer display of what the viewer is seeing.) The goggles, or glasses, are translucent. It's a little like Google Glass but with actual glass and much more immersive.


New research links gesture and language learning. If we think about how we think, it may seem obvious that learning is enabled when we can ‘grapple’ with what we are trying to learn. The close coordination between hand and eye (hand and mind) should make it obvious that speaking with gesture is like virtually playing with our speech-thoughts.
Learning with all the senses: Movements and images facilitate vocabulary learning
Scientists from the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig have used Vimmish, an artificial language specifically developed for scientific research, to study how people can best memorise foreign-language terms.

According to the researchers, it is easier to learn vocabulary if the brain can link a given word with different sensory perceptions. The motor system in the brain appears to be especially important: When someone not only hears vocabulary in a foreign language, but expresses it using gestures, they will be more likely to remember it. Also helpful, although to a slightly lesser extent, is learning with images that correspond to the word. Learning methods that involve several senses, and in particular those that use gestures, are therefore superior to those based only on listening or reading.


Here’s an interesting article about a new way to extend our senses; this one may extend our proprioception from containment within our bodies to a sense of self-orientation-in-larger-environment.
Researchers equip humans with magnetic sense
Scientists from Germany and Japan have developed a new magnetic sensor, which is thin, robust and pliable enough to be smoothly adapted to human skin, even to the most flexible part of the human palm. This is feeding the vision to equip humans with magnetic sense.

Magnetoception is a sense which allows bacteria, insects and even vertebrates like birds and sharks to detect magnetic fields for orientation and navigation. Humans are however unable to perceive magnetic fields naturally. Dr. Denys Makarov and his team have developed an electronic skin with a magneto-sensory system that equips the recipient with a "sixth sense" able to perceive the presence of static or dynamic magnetic fields. These novel magneto-electronics are less than two micrometers thick and weigh only three grams per square meter; they can even float on a soap bubble.

The new magnetic sensors withstand extreme bending with radii of less than three micrometers, and survive crumpling like a piece of paper without sacrificing sensor performance. On elastic supports like a rubber band, they can be stretched to more than 270 percent for over 1,000 cycles without fatigue. These versatile features are imparted to the magnetoelectronic elements by their ultra-thin and flexible, yet robust, polymeric support.


Now here’s another interesting development, one we could imagine blending with VR in the next couple of decades. This is a 19-minute TED Talk.
Brain-to-brain communication has arrived. How we did it
You may remember neuroscientist Miguel Nicolelis — he built the brain-controlled exoskeleton that allowed a paralyzed man to kick the first ball of the 2014 World Cup. What’s he working on now? Building ways for two minds (rats and monkeys, for now) to send messages brain to brain. Watch to the end for an experiment that, as he says, will go to "the limit of your imagination."


So if we are already enabling brain-to-brain interfaces, what will mind become? Here’s the latest paper by one of my favorite scientists, Stuart Kauffman, looking at what consciousness may be. A must-read for anyone interested in consciousness studies.
Beyond the Stalemate: Conscious Mind-Body - Quantum Mechanics - Free Will - Possible Panpsychism - Possible Interpretation of Quantum Enigma
http://arxiv.org/ftp/arxiv/papers/1410/1410.2127.pdf
Introduction
I wish to discuss a large, interwoven set of topics pointed at in the title above. Much of what I say is highly speculative, some is testable, some is, at present, surely not. It is, I hope, useful to set these ideas forth for our consideration. What I shall say assumes quantum measurement is real, and that Bohm's interpretation of Quantum Mechanics is not true.

The Stalemate: In our contemporary neurobiology and much of the philosophy of mind post-Descartes, we are classical machines and either mindless, or mind is at best epiphenomenal and can have no consequences for the physical world. The first main point of this paper is that we are not forced to this conclusion, but must give up total reliance on classical physics.


The question of how long Moore’s Law can be maintained is a pressing one. Here is a possible answer.
Moore’s Law and Moving Beyond Silicon: The Rise of Diamond Technology
The Power to Transform Industries
Indeed, many consider that the industry is entering the Dawn of a Diamond Age of Electronics. They believe the world’s hardest-known natural material with exceptional electronic properties will take a variety of industries to the next level of performance. It is on the verge of being the accepted choice to produce today’s most advanced industrial products – and its use in consumer electronics ranks close behind.

Why diamond? It can run hotter without degrading in performance (over 5 times hotter than silicon), is more easily cooled (with 22 times the heat-transfer efficiency of silicon), can tolerate higher voltages before breaking down, and electrons (and electron holes) can move faster through it. Already, semiconductor devices with diamond material are available that deliver one million times more electrical current than silicon or previous attempts using diamond.

Diamond-based semiconductors are capable of increasing power density as well as creating faster, lighter, and simpler devices. They’re more environmentally friendly than silicon and improve thermal performance within a device. As a result, the diamond-materials market for semiconductors could easily eclipse that of silicon carbide, which is projected to grow at a 42.03 percent compound annual rate through 2020 from $3.3 billion in 2014, due to performance, cost, and direct integration with the existing silicon platform.


Speaking of Moore’s Law, here’s something coming to a neighborhood near you, maybe to your kids? New types of team games? ….
A Smartphone Is the Brain for This Autonomous Quadcopter
The only thing that the quadrotor has in terms of electronics is a motor controller and a battery. All of the clever stuff is being handled entirely by the phone, which is just a stock Android smartphone with a Qualcomm Snapdragon inside. In other words, this is not a special device (like Google’s Project Tango phone, which the UPenn researchers used in a demo last year); it’s something that you can pick up for yourself, and the UPenn guys only half-jokingly offered to install their app on my phone and let it fly the robot.

This is a fantastic example of just how far smartphones have come: they’re certainly powerful computers, but it’s the integrated sensing that comes standard in almost all of them (things like gyros, accelerometers, IMUs, and high resolution cameras) that makes them ideal for low-cost brains for robots. What’s unique about the CES demo is that it’s the first time that a sophisticated platform like this (vision-based real-time autonomous navigation of a flying robot is pretty darn sophisticated) has been controlled by a very basic consumer device.

“What we’d like to do is make these kinds of robots smaller, smarter, and faster. When you make things smaller, the number of things you can do increases, and that’s where we hope to use lots of these guys. So think about a single flying phone that you have today; tomorrow, you’ll see a swarm of flying phones. That’s what we’re working towards.”
A swarm of flying phones?


Speaking of swarms, here’s an article from Nature focused on the next decade, and on how our systems can learn to respond to “the long tail of unlikely events”.
Autonomous vehicles: No drivers required
Automation is one of the hottest topics in transportation research and could yield completely driverless cars in less than a decade.
This summer, people will cruise through the streets of Greenwich, UK, in electric shuttles with no one's hands on the steering wheel — or any steering wheel at all.

The £8-million (US$12-million) project, part of a larger study of driverless cars funded by the UK government, is just one of many efforts that seek to revolutionize transportation. Spurred in part by a desire to end the carnage from road accidents — about 90% of which are caused by driver error — the race is on to transfer control from people to computers that never doze at the wheel, get distracted by text messages or down too many pints at the pub.

Almost every major car maker is working on some form of automation, as are many electronics companies. But looming over everyone is the Internet giant Google: the company has been widely acknowledged as the world leader in driverless-car research since October 2010, when it announced that it had entered the field a year earlier — and that its driverless test vehicles had already logged more than 200,000 kilometres on roads near its headquarters in Mountain View, California, and elsewhere in the state. The public's enthusiastic response to that revelation galvanized car makers and government research-funding agencies around the world to accelerate their efforts in this arena.

“I've never seen anything move so quickly from concept into products,” says Richard Bishop, an automotive consultant who headed a US Department of Transportation research programme on automated motorways in the 1990s. Although many technical challenges remain, developers say they can see clear paths for solving most or all of them.


Now what will a swarm of autonomous cars be capable of? New forms of stigmergy? New forms of pheromone clouds? This is a very entertaining article that presents very accessible, easy-to-understand ideas about emergence and also about how the Internet's Transmission Control Protocol (TCP) works; there’s also a TCP joke in the article. Must Read.
The Independent Discovery of TCP/IP, By Ants
What does the internet have in common with an ant colony? More than you might think.
It doesn’t have much in common with the common ant, of course. As the anteater in Douglas Hofstadter’s Gödel Escher Bach puts it, “Just as you would never confuse an individual tree with a forest, so here you must not take an ant for the colony.”

It’s a fanciful chapter: the anteater claims to be a close friend of a “witty” ant hill named “Aunt Hillary”, despite being feared by her comparatively simple-minded component ants. But Hofstadter uses Aunt Hillary and her ants as a metaphor for human minds and their neurons, and computer programs and their ones and zeros.

In the decades since Gödel Escher Bach was published, this computation metaphor has proven uncannily useful. Some ant colonies seem to be capable of achieving great cunning through their little insects. Studying ant colonies has helped engineers build better computer programs -- like the “ant colony” algorithm used to optimize route planning -- and recently a computer scientist and a biologist discovered the shadow of the protocols that make up the Internet in the foraging patterns of certain ant colonies.

To honor their independent discovery, Balaji Prabhakar, a professor of electrical engineering and computer science, called the ant’s algorithm the “anternet.” He remarked, "Ants have discovered an algorithm that we know well, and they've been doing it for millions of years.”
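The algorithm in question is, at heart, TCP's additive-increase/multiplicative-decrease feedback loop: ramp up while acknowledgements (or food-laden foragers) keep coming back, and back off sharply when they stop. A minimal sketch, with illustrative parameters that are not from the study:

```python
def regulate(capacity, rounds=40, rate=1.0):
    """AIMD rate control: a colony's outgoing forager rate vs. the food supply."""
    history = []
    for _ in range(rounds):
        sent = rate
        returned = min(sent, capacity)  # foragers succeed only up to the food available
        if returned >= sent:
            rate += 1.0                 # all returned with food: additive increase
        else:
            rate /= 2.0                 # some returned empty: multiplicative decrease
        history.append(rate)
    return history

# The rate sawtooths around the true capacity instead of overshooting it,
# just as a TCP sender hunts for the bandwidth of an unknown link.
print(regulate(capacity=10.0)[-6:])
```

The same loop, with packets for foragers and acknowledgements for returning food, is how TCP shares a link it cannot directly measure.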


And as strange as it may sound, this next article is related: what we are learning about the technologies of living systems will help us extend our own living systems.
How did multicellular life evolve?
Scientists are discovering ways in which single cells might have evolved traits that entrenched them into group behavior, paving the way for multicellular life. These discoveries could shed light on how complex extraterrestrial life might evolve on alien worlds.

Researchers detailed these findings in the Oct. 24 issue of the journal Science.
The first known single-celled organisms appeared on Earth about 3.5 billion years ago, roughly a billion years after Earth formed. More complex forms of life took longer to evolve, with the first multicellular animals not appearing until about 600 million years ago.

The evolution of multicellular life from simpler, unicellular microbes was a pivotal moment in the history of biology on Earth and has drastically reshaped the planet's ecology. However, one mystery about multicellular organisms is why cells did not revert to single-celled life.

"Unicellularity is clearly successful—unicellular organisms are much more abundant than multicellular organisms, and have been around for at least an additional 2 billion years," said lead study author Eric Libby, a mathematical biologist at the Santa Fe Institute in New Mexico. "So what is the advantage to being multicellular and staying that way?"


A 23-minute presentation on the evolution of Bitcoin 2.0. This is a nice summary of some of the current challenges and future aims of the community of global ‘hackers’ aiming to make progress in this domain. Also, as strange as it may sound, the ‘blockchain’ protocol may be the next pheromone cloud.
VITALIK BUTERIN FOUNDER ETHEREUM - Bitcoin 2.0 - Ideas and Applications
TNABC 2015 - VITALIK BUTERIN FOUNDER ETHEREUM - Bitcoin 2.0 - Ideas and Applications
Bitcoinist.net Presents in Association with TNABC


Speaking of imagining the future, here’s a wonderful one-hour video with William Gibson.
William Gibson: Technology, Science Fiction & the Apocalypse
On the occasion of the 30th anniversary of his landmark novel "Neuromancer," CHF favorite William Gibson returns to the Festival. This autumn he’ll celebrate the publication of his latest work, "The Peripheral," a high-tech thriller set partly in a decadent post-apocalyptic future. Gibson is joined in conversation by author Carol Anshaw.


Speaking of authors and futurists, here’s a 6-minute video with someone we should all recognize. Here he talks with an assumption of ubiquitous mobile phones. One key issue he mentions at the beginning is that in 1976 we had lots of information ‘coming in’, but only the phone enabled us to send ‘information out’. This is interesting, as the incumbents providing us with Internet service are very happy with systems that increase ‘download’ speed (information coming in) but have done nothing in the last couple of decades to increase our ‘upload’ speed. The differential is usually close to 10 times faster download than upload. Sort of like building an infrastructure that only wants consumers, not producers.
Listening to Clarke, what is still keeping us from achieving this vision??? Although the wristwatch telephone is finally here.
Interview with author/futurist Arthur C. Clarke, from an AT&T-MIT Conference, 1976
Arthur C. Clarke, science fiction author and futurist, crossed paths with the scientists of the Bell System on numerous occasions. In 1945, he concurrently, but independently, conceived of the first concept for a communications satellite at the same time as Bell Labs scientist, John Robinson Pierce. Pierce too, was a science fiction writer. To avoid any conflict with his day job at Bell Labs, Pierce published his stories under the pseudonym J.J. Coupling.

In the early 1960s, Clarke visited Pierce at Bell Labs. During his visit, Clarke saw and heard the voice synthesis experiments going on at the labs by John L. Kelly and Max Mathews, including Mathews’ computer vocal version of “Bicycle Built for Two”. Clarke later incorporated this singing computer into the climactic scene in the screenplay for the movie 2001: A Space Odyssey, where the computer HAL9000 sings the same song. According to Bob Lucky, another Bell Labs scientist, on the same visit, Clarke also saw an early Picturephone, and incorporated that into 2001 as well.

In 1976, AT&T and MIT held a conference on futurism and technology, attended by scientists, theorists, academics and futurists. This interview with Clarke during this conference is remarkably prescient—especially about the evolution of communications systems for the next 30+ years.

The interview was conducted for an episode of a Bell System newsmagazine, but this is the raw interview footage.


Speaking about apocalypse: do you remember when…? But maybe you never did that. Maybe the past is just as uncertain as the future?
People can be convinced they committed a crime that never happened
Innocent adult participants can be convinced, over the course of a few hours, that they had perpetrated crimes as serious as assault with a weapon in their teenage years. This research indicates that the participants came to internalize the stories they were told, providing rich and detailed descriptions of events that never actually took place.


Here’s a wonderful piece by the great Canadian writer and Creative Commons activist Cory Doctorow.
Writer Naomi Novik explains copyright to Congress
Naomi Novik isn't just a talented author (she won the John W Campbell Award for best new writer in 2007 on the strength of her fabulous Temeraire novels, which retell the Napoleonic wars with dragons providing air-support!), she's also a profound thinker on the questions of reuse, remixing, intellectual freedom and copyright.

Last week she gave testimony to the House Judiciary Committee's Subcommittee on Courts, Intellectual Property and the Internet that described the way that creators rely on their ability to remix in order to create new and original works.

One thing I love about Novik is her intellectual honesty and her willingness to cut through the self-serving, romantic mythology of the wholly original creator, and to both acknowledge and celebrate the fact that her originality comes about by taking the works that others created before her and adapting them through her own artistic process, "Original work, work that stands alone, doesn't just pop up out of nowhere. It is at the end of a natural spectrum of transformation."


All in a light day’s work - here’s some fascinating light-hacking by Ottawa researchers.
A Möbius Strip Made of Light
Most nerdy kids, of the type that would grow up to read IEEE Spectrum, were excited when they first learned about the Möbius strip, a three-dimensional shape with only one side. Many immediately fashioned their own, by cutting a thin strip of paper, twisting it, and joining one end to the other to make a continuous surface. Now scientists have figured out how to make a Möbius strip out of light.

The researchers, from Canada, Europe, and the United States, were able to twist the polarization of a light beam in order to form a 3-D structure out of an electric field with the same topology as a Möbius strip made of matter. Light waves consist of both an electric and a magnetic field, which oscillate perpendicular to each other. Polarization refers to the direction in which the electric field oscillates.

Under normal circumstances the polarization does not change as the lightwave moves through space. But the team managed to change those circumstances, says Ebrahim Karimi, a postdoctoral fellow in Robert Boyd’s Quantum Photonics group at the University of Ottawa, and one of the authors of the paper explaining the results in this week’s Science Express. The device that does the job, which they call a “q plate,” consists of a series of liquid crystals. Instead of all being lined up in the same direction, the individual liquid crystals are set at various angles to each other, forming a complex pattern through which the light can pass. Shooting a laser through the q plate creates an interference pattern that twists the polarization along the path of the light beam.
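The topology the polarization traces is easy to play with numerically. Here is a minimal sketch - not the researchers' optics, just the textbook parametrization of a Möbius strip - showing its defining one-sidedness: after one full trip around the loop, the vector pointing across the strip's width comes back inverted.

```python
import numpy as np

def mobius(u, w):
    """Standard Möbius strip parametrization.

    u: angle around the loop (0..2*pi); w: position across the
    width (-0.5..0.5). The half-angle u/2 produces the half-twist.
    """
    x = (1 + w * np.cos(u / 2)) * np.cos(u)
    y = (1 + w * np.cos(u / 2)) * np.sin(u)
    z = w * np.sin(u / 2)
    return np.array([x, y, z])

def cross_section_dir(u):
    """Unit vector pointing across the strip's width at angle u."""
    return np.array([np.cos(u / 2) * np.cos(u),
                     np.cos(u / 2) * np.sin(u),
                     np.sin(u / 2)])

# One full circuit flips the width direction: what was the "top"
# face is now the "bottom" face, so the strip has only one side.
start = cross_section_dir(0.0)
end = cross_section_dir(2 * np.pi)
print(np.allclose(end, -start))  # prints True
```

The same sign flip is what the q plate imposes on the beam's polarization direction as the light circles the axis.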


Speaking of light and resolution - here’s something that will be coming to displays near us soon.
High-Resolution Printing of Quantum Dots For Vibrant, Inexpensive Displays
Using a technique much like inkjet printing, engineers have created high-resolution patterns of quantum dots. Quantum dots (QDs) are light-emitting semiconductor nanocrystals that, used in light-emitting diodes (LEDs), hold the promise of brighter, faster displays. But there is no reliable and efficient way to pattern them at a high resolution to create multicolor pixels for displays.

John Rogers, a materials science and engineering professor at the University of Illinois in Urbana-Champaign, and his colleagues are repurposing a printing method they devised for other applications. When used with “QD ink,” it can create lines and spots that are just 0.25 micrometers wide. They made arrays and complex patterns of QDs in multiple colors, and could even print QDs on top of others of a different color. They sandwiched these patterns between electrodes to make bright QD LEDs. Details about the results were published in the journal Nano Letters.
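To get a feel for what 0.25-micrometer features buy you, here is a back-of-the-envelope sketch. The pixel pitches below are hypothetical illustrations, not figures from the paper - they just convert a pitch into the familiar pixels-per-inch number.

```python
MICRONS_PER_INCH = 25_400.0

def max_ppi(pixel_pitch_um):
    """Pixels per inch achievable at a given pixel pitch (micrometres)."""
    return MICRONS_PER_INCH / pixel_pitch_um

# Hypothetical pitches for illustration: suppose a full pixel built
# from 0.25 um printed lines ends up about 1.5 um across, versus
# the ~78 um pitch of a 2015-era high-density phone screen.
print(max_ppi(1.5))   # ~16,900 ppi
print(max_ppi(78.0))  # ~326 ppi
```

Even with generous margins around each printed line, feature sizes this small leave display-grade pixel densities far behind.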

Quantum dot TVs were big at the 2015 Consumer Electronics Show (CES) in Las Vegas. Companies such as Sony, Samsung and LG all have their own version. But the TVs demoed at CES use QDs along with blue inorganic LEDs to create a white backlight. The white light is beamed through color filters at each pixel to generate any color. The quantum dots are simply filled in a tube or painted on the entire backpanel; they don’t need to be patterned.


So by now everyone should have heard of virtual reality gaming - here are youngsters (not elders, this time) playing a scary virtual reality game - 13 min - definitely worth the view.
For Fun
OCULUS RIFT - AFFECTED: THE MANOR (Teens React: Gaming)


Now with all the talk of drones - here’s one possibility in a 4 min video.
TU Delft - Ambulance Drone
Each year nearly a million people in Europe suffer a cardiac arrest. A mere 8% survive, due to the slow response times of emergency services. The ambulance-drone is capable of saving lives with an integrated defibrillator. The goal is to improve existing emergency infrastructure with a network of drones. This new type of drone can fly at over 100 km/h and reach its destination within 1 minute, which increases the chance of survival from 8% to 80%! The drone folds up and becomes a toolbox for all kinds of emergency supplies. Future implementations will also serve other use cases such as drowning, diabetes, respiratory issues and traumas.
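The coverage implied by those claims is easy to sketch. This back-of-the-envelope calculation uses only the speed and response-time figures from the video; the city size is a hypothetical illustration, and it ignores takeoff, routing, and obstacles.

```python
import math

speed_kmh = 100.0    # claimed cruise speed
response_min = 1.0   # claimed response time

radius_km = speed_kmh * (response_min / 60.0)  # ~1.67 km straight-line range
area_km2 = math.pi * radius_km ** 2            # ~8.7 km^2 covered per station

# Hypothetical example: stations needed to blanket a 100 km^2 city
city_km2 = 100.0
stations = math.ceil(city_km2 / area_km2)
print(radius_km, area_km2, stations)  # 12 stations
```

A dozen rooftop drone stations for a mid-sized city is the kind of network the TU Delft concept envisions.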


On the energy front, here’s a Scientific American article that’s a short wrap-up of last year - and, as it suggests, 2015 may mark a significant threshold.
Renewable Energy Shines in 2014 http://blogs.scientificamerican.com/plugged-in/2015/02/05/renewable-energy-shines-in-2014/
Looking back at 2014 through the prism of renewable energy, it’s hard not to get bombastic. So many records were broken, corners turned, and with costs declining, it’s hard not to wonder if 2015 will see renewable energy become nothing less than a fully competitive energy source, capturing more and more market share. But first, let’s take a brief tour to see what happened in 2014.

….For the sake of comparison, if we assume an average nuclear plant capacity at 5,000 MW, then 2014 installed wind in Germany, U.S., and China, along with solar in UK and Latin America, together equal almost seven large nuclear plants. Not bad for a year’s work.


Here’s an interesting article from the National Science Foundation - on AI and self-driving vehicles - I wonder how many drivers our military and other transportation industries will need in the next 20 years?
Programming safety into self-driving cars
UMass researchers improve artificial intelligence algorithms for semi-autonomous vehicles
For decades, researchers in artificial intelligence, or AI, worked on specialized problems, developing theoretical concepts and workable algorithms for various aspects of the field. Computer vision, planning and reasoning experts all struggled independently in areas that many thought would be easy to solve, but which proved incredibly difficult.

However, in recent years, as the individual aspects of artificial intelligence matured, researchers began bringing the pieces together, leading to amazing displays of high-level intelligence: from IBM's Watson to the recent poker-playing champion to the ability of AI to recognize cats on the internet.

These advances were on display this week at the 29th conference of the Association for the Advancement of Artificial Intelligence (AAAI) in Austin, Texas, where interdisciplinary and applied research were prevalent, according to Shlomo Zilberstein, the conference committee chair and co-author on three papers at the conference.

Zilberstein studies the way artificial agents plan their future actions, particularly when working semi-autonomously--that is to say in conjunction with people or other devices.
Examples of semi-autonomous systems include co-robots working with humans in manufacturing, search-and-rescue robots that can be managed by humans working remotely, and "driverless" cars. It is the latter topic that has particularly piqued Zilberstein's interest in recent years.


Here’s something for everyone who’s just a ‘little’ interested in what dark matter is - a short article about a recent finding, with a 30-second video explanation that’s fun and worth the watch.
Dark Matter Hunters Suspect They've Found 'Galaxy X'
Years ago, astronomers mapped out curious ripples in the cold hydrogen gas that lies within the disk of our Milky Way galaxy — and suggested that the ripples were caused by the gravitational influence of an unseen dwarf galaxy dominated by dark matter. Now those astronomers say they may have found the lurker, nicknamed "Galaxy X."

The "observational confirmation" of the prediction is detailed in a research paper that's been accepted for publication in Astrophysical Journal Letters, astronomer Sukanya Chakrabarti of the Rochester Institute of Technology told NBC News in an email.
Chakrabarti and her colleagues analyzed near-infrared data collected by the European Southern Observatory's VISTA telescope to find four young stars clustered in the constellation Norma. The stars are Cepheid variables, which can be used as yardsticks for measuring astronomical distances. The research team determined that the stars are 300,000 light-years away, well beyond the edge of the Milky Way's disk.

"The discovery of the Cepheid variables shows that our method of finding the location of dark-matter dominated dwarf galaxies works," she said. "It may help us ultimately understand what dark matter is made up of. It also shows that Newton's theory of gravity can be used out to the farthest reaches of a galaxy, and that there is no need to modify our theory of gravity."
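The "yardstick" trick rests on the distance modulus: a Cepheid's pulsation period reveals its absolute magnitude, and comparing that to how bright it appears gives the distance. A minimal sketch, with hypothetical magnitudes chosen only to land near the ~300,000 light-year figure quoted above:

```python
def distance_parsecs(apparent_mag, absolute_mag):
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical values for illustration only: a Cepheid whose
# period-luminosity relation implies absolute magnitude M = -4.0,
# observed at apparent magnitude m = 15.8.
d_pc = distance_parsecs(15.8, -4.0)
d_ly = d_pc * 3.26156  # parsecs to light-years
print(f"{d_ly:,.0f} light-years")  # ~297,000 light-years
```

The real measurement also corrects for interstellar dust dimming the stars, which is why the team worked in the near-infrared.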


Last article - this is a great interactive map of the most common jobs from 1978 to 2014. Quite amazing how secretaries were overwhelmingly common everywhere until their 1988 peak, almost gone by 1994, and completely displaced by 2002.
Map: The Most Common* Job In Every State
We used data from the Census Bureau, which has two catch-all categories: "managers not elsewhere classified" and "salespersons not elsewhere classified." Because those categories are broad and vague to the point of meaninglessness, we excluded them from our map.


Here’s a great TED Talk about complexity and the question of how to make Toast. Fun, entertaining and deeply informing about how to think about describing models of problems. Worth the view.
Tom Wujec: Got a wicked problem? First, tell me how you make toast
Making toast doesn’t sound very complicated — until someone asks you to draw the process, step by step. Tom Wujec loves asking people and teams to draw how they make toast, because the process reveals unexpected truths about how we can solve our biggest, most complicated problems at work. Learn how to run this exercise yourself, and hear Wujec’s surprising insights from watching thousands of people draw toast.
Here’s the author’s own website.
An Introduction to Systems Thinking and Wicked Problem Solving
http://www.drawtoast.com/
