Thursday, June 25, 2015

Friday Thinking, 26 June 2015

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 

The future ain’t what it used to be - and tomorrow it won’t be what it is today

In complex environments, the way to proficiency is to recombine successful elements to create new versions, some of which may thrive.

As a result, not just the user interfaces, but the operating system of work is starting to change in a radical way. The traditional industrial approach to work was to require each worker to assume a predetermined responsibility for a specific role. The new approach represents a different logic of organizing based on neither the traditional market nor a process. Whereas processes involve relations based on dependence and markets involve relations based on independence, the new networks involve relations of dynamic interdependence.

Minimal hierarchy, organizational diversity and responsiveness characterize these architectures. They are a necessary response to the increasing fuzziness of strategic horizons and short half-life of designs. Because of greater complexity, coordination cannot be planned in advance. Authority needs to be distributed; it is no longer delegated vertically but emerges horizontally. Under distributed authority work teams and knowledge workers need to be accountable to other work teams and other knowledge workers. Achievement depends on learning by mutual accountability and responsiveness.

Management and strategy used to be about rational choice between a set of known options and variables. The variables of creative work and complex environments have increased beyond systems thinking and process design. Under circumstances of rapid technological change, the management challenge is to create openness to possibilities and plausible options.

Success is based on continuous redefinition of the organization itself. It is about recombining options and contributions in a competitive and cooperative environment. Creativity is the default state of all human work. Even the most creative people are more remixers of other people’s ideas than lone inventors. Technology and development in general are not isolated acts by independent thinkers, but a complex storyline.

The democratization of technology that is taking place at the moment does not determine social and organizational change, but does create new opportunity spaces for new social practices. The opportunity we have is in new relational forms that don’t mimic the governance models of industrial firms. Network theory suggests that what the system becomes emerges from the complex, responsive relationships of its members, continuously developing in communication.
ESKO KILPI ON INTERACTIVE VALUE CREATION
Collaborative and Competitive Creativity


Bureaucracies public and private appear—for whatever historical reasons—to be organized in such a way as to guarantee that a significant proportion of actors will not be able to perform their tasks as expected. It also exemplifies what I have come to think of as the defining feature of certain utopian forms of practice: that is, ones where those maintaining the system, on discovering that it will regularly produce such failures, conclude that the problem is not with the system itself but with the inadequacy of the human beings involved—or, indeed, of human beings in general.

This essay is not, however, primarily about bureaucracy—or even about the reasons for its neglect in anthropology and related disciplines. It is really about violence. What I would like to argue is that situations created by violence—particularly structural violence, by which I mean forms of pervasive social inequality that are ultimately backed up by the threat of physical harm—invariably tend to create the kinds of willful blindness we normally associate with bureaucratic procedures. To put it crudely: it is not so much that bureaucratic procedures are inherently stupid, or even that they tend to produce behavior that they themselves define as stupid, but rather, that they are invariably ways of managing social situations that are already stupid because they are founded on structural violence. I think this approach allows potential insights into matters that are, in fact, both interesting and important: for instance, the actual relationship between those forms of simplification typical of social theory, and those typical of administrative procedures.
David Graeber - Dead zones of the imagination: On violence, bureaucracy, and interpretive labor


...medicine is entering a new phase in which cells will become living drugs. It is a third pillar of medicine. The pharmaceuticals that arose from synthetic chemistry made up the first pillar. Then, after Genentech produced insulin in a bacterium in 1978, came the revolution of protein drugs. Now companies like Juno are hoping to use our own cells as the treatment. In the case of T cells, the tantalizing evidence is that some cancers could be treated with few side effects other than a powerful fever.
Biotech’s Coming Cancer Cure


The looming change in geopolitics has a number of dimensions and will very plausibly be driven by innovation arising from continued massive urbanization, ubiquitous near-free energy and new forms of self-governance.
Asia-Pacific is wealthier than Europe
LAST year, global private wealth grew by 12%, or $17.5 trillion, to reach $164 trillion (in stocks, bonds, savings and cash) according to a report released this week by BCG, a consultancy. Good news for many, but particularly for Asia, where private wealth grew by a whopping 29% compared to 5.6% in North America and 6.6% in Europe.

For the first time in modern history, Asia is now richer than Europe. And it is catching up with North America too; by 2019 the region’s wealth is expected to reach $75 trillion compared to $63 trillion in North America. And although America is still the country with by far the most millionaires in the world, of the 2m new millionaire households created last year, 62% are from Asia-Pacific. China is the main driver here; it will account for 70% of Asia’s growth between now and 2019, predicts BCG, and by 2021 it will overtake America as the world’s wealthiest nation.

At the same time, the world’s wealth is being concentrated in fewer pockets. Whereas in 2012 38% was held by millionaires, in 2014 this was 42% and the trend is increasing. Whereas households with more than a million dollars in the bank saw their wealth swell by 16% on average, those with less wealth saw it grow by only 9%.

But it wasn’t just the performance of existing assets that drove growth; new wealth was created as well, particularly outside the rich world. Much of Asia’s newly created wealth came from entrepreneurs of mid-sized companies, says Daniel Kessler, one of the authors of the report. Of the $4.7 trillion in new wealth created last year, $3.2 trillion came from outside the rich world, particularly the Asian powerhouses.


Here’s a strong signal about the tipping point for solar - it may well be that 2015 will be regarded as the point where the phase change in energy began.
India Just Upped Its Solar Target Five-Fold, Will Install More Solar This Year Than Germany
On Wednesday, Prime Minister Narendra Modi and the Indian Cabinet approved increasing the country’s solar target five times to a goal of reaching 100 gigawatts, up from 20 GW, by 2022.

The new solar capacity will be nearly split between residential and large-scale solar projects, with some 40 GW expected to be generated from rooftop installations and the remaining 60 GW coming from larger, grid-connected projects, such as solar farms.

“With this ambitious target, India will become one of the largest green energy producers in the world, surpassing several developed countries,” reads the announcement. “Solar power can contribute to the long term energy security of India, and reduce dependence on fossil fuels that put a strain on foreign reserves and the ecology as well.”

The announcement ups the stakes significantly for the Jawaharlal Nehru National Solar Mission, launched in 2010 by Prime Minister Manmohan Singh, which aims to help the country achieve success with solar energy deployment.


For those who are still skeptical about the phase transition in energy that we are entering, here’s a new advance - still under development - but think where we will be in another decade.
Chemists devise technology that could transform solar energy storage
The materials in most of today's residential rooftop solar panels can store energy from the sun for only a few microseconds at a time. A new technology developed by chemists at UCLA is capable of storing solar energy for up to several weeks—an advance that could change the way scientists think about designing solar cells.

The findings are published June 19 in the journal Science.
The new design is inspired by the way that plants generate energy through photosynthesis.

"Biology does a very good job of creating energy from sunlight," said Sarah Tolbert, a UCLA professor of chemistry and one of the senior authors of the research. "Plants do this through photosynthesis with extremely high efficiency."

"In photosynthesis, plants that are exposed to sunlight use carefully organized nanoscale structures within their cells to rapidly separate charges—pulling electrons away from the positively charged molecule that is left behind, and keeping positive and negative charges separated," Tolbert said. "That separation is the key to making the process so efficient."


The Internet and technology wars may just be beginning. Our choice may come down to open-source versus proprietary walls. The implications of ‘who wins’ are significant. This is a must read.
Google on Apple: The end is near
I love using Google services on Apple hardware, but now I fear those days are numbered
The chat room and social network religious wars between Apple and Google demand that you take sides. But I've always felt that the best experience includes a cherry-picking of Apple hardware, Google services and apps from both.

For example, on my MacBook Pro, iPad and iPhone, my No. 1 application is Google Chrome, where I obsessively use Google Search, Google Photos, Google Drive, Google Docs, Google Sheets, Google News, Google Maps, Google+, Inbox and more.

Yes, I use non-Google services, but eight out of the top 10 sites I use are Google services.
After the announcements at Apple WWDC last week, and at Google I/O last month, it's clear that the days of using Google services on Apple hardware are numbered.

Soon we'll be forced to choose between all-Google or all-Apple.


If we are going to speak of platform wars we can’t leave Facebook out.
Inside a counterfeit Facebook farm
The curtains are drawn, and the artificial moonlight of computer screens illuminates the room. Eight workers sit in two rows, their tools arranged on their desks: a computer, a minaret of cellphone SIM cards, and an old cellphone. Tens of thousands of additional SIM cards are taped into bricks and stored under chairs, on top of computers, and in old instant-noodle boxes around the room.

[The] boss sits at a desk positioned behind his employees, occasionally glancing up from his double monitor to survey their screens. Even in the gloom, he wears Ray-Ban sunglasses to shield his eyes from the glare of his computer. ("Richard Braggs" is the alias he uses for business purposes.)

Casipong inserts earbuds, queues up dance music, and checks her clients' instructions. Their specifications are often quite pointed. A São Paulo gym might request 75 female Brazilian fitness fanatics, or a bar in San Francisco's Castro district might want 1,000 local gay men. Her current order is the most common: fake Facebook profiles of beautiful American women between the ages of 20 and 30. Once a client has received the accounts, he will probably use them to sell Facebook likes to customers looking for an illicit social media boost.

Most of the accounts Casipong creates are sold to these digital middlemen — "click farms" as they have come to be known. Just as fast as Silicon Valley conjures something valuable from digital ephemera, click farms seek ways to create counterfeits. Just Google "buy Facebook likes" and you'll see how easy it is to purchase black-market influence on the internet: 1,000 Facebook likes for $29.99; 1,000 Twitter followers for $12; or any other type of fake social media credential, from YouTube views to Pinterest followers to SoundCloud plays. Social media is now the engine of the internet, and that engine is running on some pretty suspect fuel.

Casipong plays her role in hijacking the currencies of social media — Facebook likes, Twitter followers — by performing the same routine over and over again. She starts by entering the client's specifications into the website Fake Name Generator, which returns a sociologically realistic identity: Ashley Nivens, 21, from Nashville, Tennessee, now a student at New York University who works part-time at American Apparel. Casipong then creates an email account. The email address forms the foundation of Ashley Nivens' Facebook account, which is fleshed out with a profile picture from photos that Braggs' workers have scraped from dating sites. The whole time, a proxy server makes it seem as though Casipong is accessing the internet from Manhattan, and software disables the cookies that Facebook uses to track suspicious activity.

Next, Casipong inserts a SIM card into a Nokia cellphone, a pre–touch screen antique that's been used so much, the digits on its keypad have worn away. Once the phone is live, she types its number into Nivens' Facebook profile and waits for a verification code to arrive via text message. She enters the code into Facebook and — voilà! — Ashley Nivens is, according to Facebook's security algorithms, a real person. The whole process takes about three minutes.


So let’s not leave Microsoft out of this. Here’s an article on their Privacy Policy and Service Agreement - this is worth the read - both for the shocking degree to which the platform grants itself rights and for the insight into the trajectory that MS is taking as it moves to catch up with the others in Cloud-Land. :)
Microsoft’s new small print – how your personal data is (ab)used
Microsoft has renewed its Privacy Policy and Service Agreement. The new services agreement goes into effect on 1 August 2015, only a couple of days after the launch of the Windows 10 operating system on 29 July.

The new “privacy dashboard” is presented as giving users the ability to control, in a centralised manner, their data related to various products. Microsoft’s deputy general counsel, Horacio Gutierrez, wrote in a blog post that Microsoft believes “that real transparency starts with straightforward terms and policies that people can clearly understand”. We copied and pasted the Microsoft Privacy Statement and the Services Agreement into a document editor and found that these “straightforward” terms are 22 and 23 pages long respectively. Summing up these 45 pages, one can say that Microsoft basically grants itself very broad rights to collect everything you do, say and write with and on your devices in order to sell more targeted advertising or to sell your data to third parties. The company appears to be granting itself the right to share your data either with your consent “or as necessary”.

The French tech news website Numerama analysed the new privacy policy and found a number of conditions users should be aware of:


Given the many issues with social media platforms and with social media itself - here’s a very interesting initiative to help everyone become a better citizen journalist - as well as help us all wayfind through the exponential increase in information. There are some great links in the article that will be of interest to anyone interested in verifying content on the Internet.
Introducing the First Draft Coalition
In those crucial moments in the aftermath of a breaking news event, like the Charlie Hebdo attacks in Paris or the earthquake in Nepal, eyewitness documentation can be essential to piecing together what happened. The revolution in user-generated news content presents a great opportunity to expand our view of the world, but it also raises entirely new challenges for the news industry to grapple with. How and where do you find these images? How can you verify that they are authentic and genuine? Do you have the right to use them? And what ethical responsibilities should you consider before publishing?

Launching today, the First Draft Coalition is a group of thought leaders and pioneers in social media journalism who are coming together to help you answer these questions, through training and analysis of eyewitness media.

Our founding members, from different organisations and projects, are each dedicated to raising awareness and improving standards around the use of content sourced from the social web. They are Bellingcat, Eyewitness Media Hub, Emergent, Meedan, Reported.ly, Storyful and Verification Junkie. Our aim is to open up the conversation around the use of eyewitness media in news reporting with a strong focus on ethics, verification, copyright and protection, and we want to reach and hear from everyone in the journalism community, including students, lecturers, local reporters and international editors.

With the support of Google News Lab, we’re getting straight to work on developing a new destination website that will feature essential training materials, plus a database of case studies, resources and tools. The site will publish regular articles, interviews and reviews covering all aspects of handling eyewitness media, including the most effective techniques for discovery and verification alongside ethical and legal guidance for publishing and broadcasting.


Suppose social platforms were more like a commons - public infrastructure - what could we do on such platforms?
NEW YORK CITY TESTS DIGITAL BALLOT IN PARTICIPATORY BUDGET VOTE
In New York City's fourth year of participatory budgeting, five city council districts pilot a digital ballot and experimental voting interfaces designed to make "the best decision possible."
Participatory budgeting is the practice, originating in Brazil, of letting community residents decide how municipal funds should be allocated in their neighborhood. In New York City, it is an almost year-long process, beginning with the development of proposals in the fall, followed by a consultation with city agencies on budget and feasibility in the winter, and finally voting on projects in the spring. The participatory budgeting vote is a far more inclusive process than local elections: Immigration status is of no import, as long as the person is a resident of the council district in which they vote, and, depending on the district, residents as young as 14 or 16 can participate. This year, New York's fourth, more than 51,000 people voted on how to spend $32 million on capital projects.

“The level of engagement and enthusiasm in this year’s Participatory Budgeting process was unprecedented and deeply democratic,” Speaker Melissa Mark-Viverito said in a public statement. “Across the city, thousands of residents of all ages and backgrounds came together to make their neighborhoods a better place to call home. Participatory Budgeting breaks down barriers that New Yorkers may face at the polls—including youth, income status, English-language proficiency and citizenship status—resulting in a civic dialogue that is truly inclusive and representative of the diversity of this community and this city.”


This is a 20-page paper by David Graeber of ‘Debt: The First 5,000 Years’ fame. It also appears in a great anthropology journal with open access to all of its articles.
Dead zones of the imagination: On violence, bureaucracy, and interpretive labor
Abstract
The experience of bureaucratic incompetence, confusion, and its ability to cause otherwise intelligent people to behave outright foolishly, opens up a series of questions about the nature of power or, more specifically, structural violence. The unique qualities of violence as a form of action means that human relations ultimately founded on violence create lopsided structures of the imagination, where the responsibility to do the interpretive labor required to allow the powerful to operate oblivious to much of what is going on around them, falls on the powerless, who thus tend to empathize with the powerful far more than the powerful do with them. The bureaucratic imposition of simple categorical schemes on the world is a way of managing the fundamental stupidity of such situations. In the hands of social theorists, such simplified schemas can be sources of insight; when enforced through structures of coercion, they tend to have precisely the opposite effect.


This is a must-view 14 min video by a world-renowned physicist talking about his experience of teaching physics successfully - by that I mean inciting learning - rather than being the ‘Sage on the Stage’.
Peer Instruction for Active Learning - Eric Mazur
Harvard University Prof. Eric Mazur on difficulties of beginners, teaching each other, and making sense of information - an approach that can be applied to any domain that involves critical thinking.


Continuing the lessons that Mazur discusses, this is an interesting article on research related to learning and MOOCs.
Why there are so many video lectures in online learning, and why there probably shouldn’t be
The Internet promised us a revolution in learning. What we ended up with looks more like educational TV on small screens. Why is that? And how might we change it?
Over the past few years, a big trend in online learning has been to move lots of content and learning materials online in the form of Massive Open Online Courses (MOOCs). While these courses cover a wide range of subjects and exist on a number of different platforms, one thing nearly all MOOCs have in common is a focus on delivering content to the learner through video. The majority of these videos look like traditional lectures chopped up into smaller chunks, in the style of a “talking head” (lecturer talks to the class) or “tablet capture” (lecturer writes on the blackboard while talking).

The choice of video is not obvious. MOOC videos are not cheap to produce. Routinely, video is the single most expensive item in a MOOC’s budget, in both time and money. And despite the relatively high cost of video production, there is scant research into the effectiveness of video as a pedagogical tool for MOOCs. What little research does exist is focused on engagement metrics (e.g., analysis of clickstream data and viewing statistics), which may or may not serve as an effective proxy for measuring learning. So why, then, are MOOCs so deeply invested in video?

Wanting to learn more, the MIT Media Lab’s Learning Over Education initiative began collaborating with researchers at The Alexander von Humboldt Institute for Internet and Society (HIIG) in August 2014. We reviewed the available literature, interviewed experts in the field, and studied how video was being used in over 20 MOOCs. The full report, Video and Online Learning: Critical Reflections and Findings From the Field, is available through the Social Science Research Network (SSRN). Some of our more interesting findings and three recommendations for the field are below.
Here’s the link to the Video and Online Learning SSRN report:


We all know that IQ tests measure what IQ tests measure - still this is an interesting development.
Deep Learning Machine Beats Humans in IQ Test
Computers have never been good at answering the type of verbal reasoning questions found in IQ tests. Now a deep learning machine unveiled in China is changing that.
Just over 100 years ago, the German psychologist William Stern introduced the intelligence quotient test as a way of evaluating human intelligence. Since then, IQ tests have become a standard feature of modern life and are used to determine children’s suitability for schools and adults’ ability to perform jobs.

These tests usually contain three categories of questions: logic questions such as patterns in sequences of images, mathematical questions such as finding patterns in sequences of numbers and verbal reasoning questions, which are based around analogies, classifications, as well as synonyms and antonyms.

It is this last category that has interested Huazheng Wang and pals at the University of Science and Technology of China and Bin Gao and buddies at Microsoft Research in Beijing. Computers have never been good at these. Pose a verbal reasoning question to a natural language processing machine and its performance will be poor, much worse than the average human ability.

Today, that changes thanks to Huazheng and pals who have built a deep learning machine that outperforms the average human ability to answer verbal reasoning questions for the first time.
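For a concrete sense of how word embeddings can be applied to this kind of question, here is a much-simplified sketch in Python using pretrained GloVe vectors via gensim. The vector set named below and the single vector-arithmetic shortcut are illustrative assumptions; the authors' actual model is considerably more elaborate.

```python
# Illustrative sketch (not the authors' model): answering an analogy-style
# verbal reasoning question with word vectors. The pretrained vector set and
# the vector-arithmetic shortcut are assumptions for illustration only.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")   # small pretrained word vectors

def solve_analogy(a: str, b: str, c: str, choices: list[str]) -> str:
    """a : b :: c : ?  -- pick the choice whose vector lies closest to b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(choices, key=lambda w: cosine(vectors[w], target))

# Example question: "man is to king as woman is to ...?"
print(solve_analogy("man", "king", "woman", ["queen", "princess", "duke", "prince"]))
```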

Human performance on these tests tends to correlate with educational background. So people with a high school education tend to do least well, while those with a bachelor’s degree do better and those with a doctorate perform best. “Our model can reach the intelligence level between the people with the bachelor degrees and those with the master degrees,” say Huazheng and co.


And another article about progress in the domain of Big Data and Deep Learning.
Google DeepMind Teaches Artificial Intelligence Machines to Read
The best way for AI machines to learn is by feeding them huge data sets of annotated examples, and the Daily Mail has unwittingly created one.
A revolution in artificial intelligence is currently sweeping through computer science. The technique is called deep learning and it’s affecting everything from facial and voice recognition to fashion and economics.

But one area that has not yet benefitted is natural language processing—the ability to read a document and then answer questions about it. That’s partly because deep learning machines must first learn their trade from vast databases that are carefully annotated for the purpose. However, these simply do not exist in sufficient size to be useful.

Today, that changes thanks to the work of Karl Moritz Hermann at Google DeepMind in London and a few pals. These guys say the special way that the Daily Mail and CNN write online news articles allows them to be used in this way. And the sheer volume of articles available online creates, for the first time, a database that computers can use to learn from and then answer related questions about. In other words, DeepMind is using Daily Mail and CNN articles to teach computers to read.
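To make the idea concrete, here is a rough sketch of how an article and its accompanying summary points can be turned into fill-in-the-blank training questions. The entity-anonymisation step, the helper names and the toy example are illustrative assumptions about the approach described in the underlying paper, not DeepMind's actual code.

```python
# Rough, illustrative sketch (not DeepMind's code): building cloze-style
# question/answer pairs from a news article and its summary points.
# The anonymisation step, names and toy data are assumptions for illustration.
def make_cloze_pairs(article: str, bullets: list[str], entities: list[str]):
    # Replace each named entity with an abstract marker so a model has to read
    # the article rather than rely on prior knowledge about the entity.
    markers = {ent: f"@entity{i}" for i, ent in enumerate(entities)}

    def anonymise(text: str) -> str:
        for ent, mark in markers.items():
            text = text.replace(ent, mark)
        return text

    context = anonymise(article)
    pairs = []
    for bullet in bullets:
        for ent, mark in markers.items():
            if ent in bullet:
                # Blank out one entity per question; the model must recover it.
                question = anonymise(bullet).replace(mark, "@placeholder", 1)
                pairs.append({"context": context, "question": question, "answer": mark})
    return pairs

pairs = make_cloze_pairs(
    article="Acme Corp announced on Monday that Jane Doe will become its new chief executive.",
    bullets=["Jane Doe named chief executive of Acme Corp."],
    entities=["Jane Doe", "Acme Corp"],
)
```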

The deep learning revolution has come about largely because of two breakthroughs. The first is related to neural networks, where computer scientists have developed new techniques to train networks with many layers, a task that has been tricky because of the number of parameters that must be fine-tuned. The new techniques essentially produce “ready-made” nets that are ready to learn.

The results clearly show how powerful neural nets have become. Hermann and co say the best neural nets can answer 60 percent of the queries put to them. They suggest that these machines can answer all queries that are structured in a simple way and struggle only with queries that have more complex grammatical structures.


This article is a fascinating view into how Google’s image recognition system works - the pictures are a must-see. Can artificial intelligence create art? Don’t know - but if I were an artist - this is a whole new domain for creative exploration.
Yes, androids do dream of electric sheep
Google sets up feedback loop in its image recognition neural network - which looks for patterns in pictures - creating hallucinatory images of animals, buildings and landscapes which veer from beautiful to terrifying
What do machines dream of? New images released by Google give us one potential answer: hypnotic landscapes of buildings, fountains and bridges merging into one.

The pictures, which veer from beautiful to terrifying, were created by the company’s image recognition neural network, which has been “taught” to identify features such as buildings, animals and objects in photographs.

They were created by feeding a picture into the network, asking it to recognise a feature of it, and modifying the picture to emphasise the feature it recognises. That modified picture is then fed back into the network, which is again tasked to recognise features and emphasise them, and so on. Eventually, the feedback loop modifies the picture beyond all recognition.

At a low level, the neural network might be tasked merely to detect the edges on an image. In that case, the picture becomes painterly, an effect that will be instantly familiar to anyone who has experience playing about with Photoshop filters.
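For the technically curious, the feedback loop described above amounts to gradient ascent on a layer's activations with respect to the input image. Below is a minimal sketch using PyTorch and a pretrained network; the layer choice, step size and iteration count are illustrative assumptions, not Google's actual implementation.

```python
# Minimal sketch of the "recognise a feature, then emphasise it" loop, assuming
# PyTorch/torchvision. Layer, step size and iterations are arbitrary choices
# for illustration, not Google's implementation.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="DEFAULT").eval()

activations = {}
model.inception4c.register_forward_hook(               # pick one intermediate layer
    lambda module, inp, out: activations.update(feat=out))

img = T.Compose([T.Resize(512), T.ToTensor()])(Image.open("input.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):                                    # feed the modified image back in each pass
    model(img)
    activations["feat"].norm().backward()              # how strongly does the layer "see" its features?
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)   # nudge pixels to emphasise them
        img.grad.zero_()
# img now drifts toward a hallucinatory, feature-saturated version of the input.
```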
Here is the link to Google’s own site with more fascinating pictures


Talking about deep learning - here’s something that suggests a deeper human-computer interface - what’s beyond the mouse and gesture.
Speech Recognition from Brain Activity
Spoken Sentences Can Be Reconstructed from Activity Patterns of Human Brain Surface / ”Brain-to-Text“ Combines Knowledge from Neuroscience, Medicine, and Informatics
Speech is produced in the human cerebral cortex. Brain waves associated with speech processes can be directly recorded with electrodes located on the surface of the cortex. It has now been shown for the first time that it is possible to reconstruct basic units, words, and complete sentences of continuous speech from these brain waves and to generate the corresponding text. Researchers at KIT and Wadsworth Center, USA present their ”Brain-to-Text“ system in the scientific journal Frontiers in Neuroscience.

These results were obtained by an interdisciplinary collaboration of researchers of informatics, neuroscience, and medicine. In Karlsruhe, the methods for signal processing and automatic speech recognition have been developed and applied. ”In addition to the decoding of speech from brain activity, our models allow for a detailed analysis of the brain areas involved in speech processes and their interaction,” explain Christian Herff and Dominic Heger, who developed the Brain-to-Text system within their doctoral studies.

The present work is the first that decodes continuously spoken speech and transforms it into a textual representation. For this purpose, cortical information is combined with linguistic knowledge and machine learning algorithms to extract the most likely word sequence. Currently, Brain-to-Text is based on audible speech. However, the results are an important first step for recognizing speech from thought alone.
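Extracting a "most likely word sequence" is essentially the same scoring problem speech recognisers solve. The toy sketch below combines a stand-in score for how well each word matches a recorded brain-signal segment with a bigram language-model score, and keeps the best-scoring sequence. Every function, template and number here is a made-up illustration; the real system uses trained models and far more efficient decoding.

```python
# Toy sketch of combining a signal-match score with a language-model score to
# pick the most likely word sequence. All values are made up for illustration.
import itertools
import math

# Toy per-word "templates" standing in for a trained brain-signal model.
TEMPLATES = {"hello": [0.9, 0.1], "world": [0.1, 0.9], "there": [0.5, 0.5]}
# Toy bigram log-probabilities standing in for a real language model.
BIGRAMS = {("<s>", "hello"): -0.2, ("hello", "world"): -0.3, ("hello", "there"): -0.7}

def brain_match_score(word, segment):
    # Stand-in for log P(brain features | word): negative distance to a template.
    return -sum((s - t) ** 2 for s, t in zip(segment, TEMPLATES[word]))

def language_model_score(prev, word):
    # Stand-in for log P(word | previous word), with a back-off penalty.
    return BIGRAMS.get((prev, word), -3.0)

def decode(segments, vocabulary, lm_weight=1.0):
    best_seq, best_score = None, -math.inf
    # Exhaustive search for clarity; real decoders use Viterbi or beam search.
    for seq in itertools.product(vocabulary, repeat=len(segments)):
        score, prev = 0.0, "<s>"
        for word, segment in zip(seq, segments):
            score += brain_match_score(word, segment) + lm_weight * language_model_score(prev, word)
            prev = word
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq

# Two recorded "segments" of features; the decoder should return ("hello", "world").
print(decode([[0.8, 0.2], [0.2, 0.8]], vocabulary=list(TEMPLATES)))
```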


Here’s an interesting development in the domain of prosthetics.
Soft Robotic Glove Could Put Daily Life Within Patients’ Grasp
The latest in assistive technology is a lightweight glove that helps patients with limited mobility grab and pick up objects.
Engineers at Harvard have developed a soft robotic glove that allows people with limited hand mobility to grasp and pick up objects. The device could help the estimated 6.8 million people in the United States who have hand mobility issues, whether from a degenerative condition, stroke, or old age.

Nine patients with ALS, muscular dystrophy, incomplete spinal cord injuries, or complications from a stroke have tested the glove so far.
The goal is to restore independence for people who have lost the ability to grasp, says Conor Walsh, a professor at Harvard’s Wyss Institute for Biologically Inspired Engineering. The project was led by Panagiotis Polygerinos, a technology development fellow in Walsh’s lab. Walsh thinks that within three years the glove will be “suitable for use in the medical environment.”

For hand mobility difficulties, existing robots with hard exoskeletons can act as assistive devices and guide patients through rehabilitation exercises. But a soft robotic glove aligns more flexibly with a patient’s joints, plays nice with soft tissue like human skin, and, since it is much lighter, could eventually be taken home instead of being limited to use in a clinic.


Here’s a brilliant, sophisticated and simple way to deter the spread of germs.
2 teens invented a genius way to stop the spread of bacteria on bathroom door handles
Sum Ming ("Simon") Wong, 17, and Kin Pong ("Michael") Li, 18, presented the design of their nifty door handle at the Intel International Science and Engineering Fair on May 12.

Here's how it works:
The door handle is coated with titanium dioxide, a mineral that kills bacteria and is found in paint and sunscreen. That takes care of preventing bacteria from growing on the surface of the handle, but because titanium dioxide is best at killing bacteria under ultraviolet (UV) light, that's not the end of the super-cool design.

Because door handles aren't normally bathed in UV light, Wong and Li figured out a way for every bit of their bacteria-fighting handle to get some UV rays: They lit it from inside. The handle itself is a cylinder of clear glass, with a strong light-emitting diode (LED) on one end that shines UV light through the length of the handle. Lit up with UV light, the titanium dioxide can go to work killing bacteria.


Here’s a story that should help us rethink GMOs - especially if they are the part of us that helps define ‘us’ versus ‘them’. To be clear - there’s still lots of work to do - but … the promise is real.
Biotech’s Coming Cancer Cure
Supercharge your immune cells to defeat cancer? Juno Therapeutics believes its treatments can do exactly that.
...In 2013 his cancer, acute lymphoblastic leukemia, was destroyed with a new type of treatment in which cells from his immune system, called T cells, were removed from his blood, genetically engineered to target his cancer, and then dripped back into his veins. Although Wright was only the second person at Seattle Children’s to receive the treatment, earlier results in Philadelphia and New York had been close to miraculous. In 90 percent of patients with acute lymphoblastic leukemia that has returned and resists regular drugs, the cancer goes away. The chance of achieving remission in these circumstances is usually less than 10 percent.

Those results explain why a company called Juno Therapeutics raised $304 million when it went public in December, 16 months after its founding. In a coup of good timing, the venture capitalists and advisors who established Juno by licensing experimental T-cell treatments in development at Seattle Children’s, the Fred Hutchinson Cancer Research Center, and hospitals in New York and Memphis took the potential cancer cure public amid a historic bull market for biotech and for immunotherapy in particular. Its IPO was among the largest stock market offerings in the history of the biotechnology industry.

The T-cell therapies are the most radical of several new approaches that recruit the immune system to attack cancers. An old idea that once looked like a dead end, immunotherapy has roared back with stunning results in the last four years. Newly marketed drugs called checkpoint inhibitors are curing a small percentage of skin and lung cancers, once hopeless cases. More than 60,000 people have been treated with these drugs, which are sold by Merck and Bristol-Myers Squibb. The treatments work by removing molecular brakes that normally keep the body’s T cells from seeing cancer as an enemy, and they have helped demonstrate that the immune system is capable of destroying cancer. Juno’s technology for engineering the DNA of T cells to guide their activity is at an earlier, more experimental stage. At the time of its IPO, Juno offered data on just 61 patients with leukemia or lymphoma.
A caveat
Moving beyond the proof of principle won’t be easy. No one has ever manufactured a cellular treatment of any commercial consequence. It’s not certain what the best way to make and deliver such personalized treatments would be. Nor is it clear whether engineered T cells can treat a wide variety of cancers; this year Juno and others are launching new studies to find out. Even in leukemia, cancer that affects the bone marrow and blood, it’s too early to declare a cure. The majority of patients receiving the therapy have been treated only in the last 12 months. About 25 percent have seen their cancers roar back, sometimes mutated in a way that makes them immune to the T cells. At 18 months since his treatment, Wright, who hopes to become a police officer, is one of the longest survivors.


Energy is literally everywhere - the trick is capturing it - this is fascinating - the 4 min video is worth the view.
Scientists Capture the Energy of Evaporation to Drive Tiny Engines
Devices produce electricity from spores resting on water’s surface, but practical applications remain distant.
Harnessing the power from a fundamental process that’s happening constantly, all over the world, a team of scientists at Columbia University have devised tiny engines powered by evaporation. The devices generate electricity from the energy produced by bacterial spores known as Bacillus subtilis, which exhibit strong mechanical responses to changing relative humidity.

The spores expand when they absorb water and contract when they dry out. By controlling the evaporation-driven moisture in the air that the spores are exposed to, the devices capture the energy of these expansions and contractions to drive rotary or piston engines. The research was published on Tuesday in the journal Nature Communications.


This is an interesting article on the future of data, Big Data and Data Scientists. The podcast is 30 min.
The future of data at scale
The O'Reilly Radar Podcast: Turing Award winner Michael Stonebraker on the future of data science.
In March 2015, database pioneer Michael Stonebraker was awarded the 2014 ACM Turing Award “for fundamental contributions to the concepts and practices underlying modern database systems.” In this week’s Radar Podcast, O’Reilly’s Mike Hendrickson sits down with Stonebraker to talk about winning the award, the future of data science, and the importance — and difficulty — of data curation.

One size does not fit all
Stonebraker notes that since about 2000, everyone has realized they need a database system, across markets and across industries. “Now, it’s everybody who’s got a big data problem,” he says. “The business data processing solution simply doesn’t fit all of these other marketplaces.” Stonebraker talks about the future of data science — and data scientists — and the tools and skill sets that are going to be required:

It’s all going to move to data science as soon as enough data scientists get trained by our universities to do this stuff. It’s fairly clear to me that you’re probably not going to retread a business analyst to be a data scientist because you’ve got to know statistics, you’ve got to know machine learning. You’ve got to know what regression means, what Naïve Bayes means, what k-Nearest Neighbors means. It’s all statistics.

All of that stuff turns out to be defined on arrays. It’s not defined on tables. The tools of future data scientists are going to be array-based tools. Those may live on top of relational database systems. They may live on top of an array database system, or perhaps something else. It’s completely open.
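As a small illustration of the "defined on arrays, not tables" point, here is a k-Nearest Neighbors classifier written directly against NumPy arrays; the data is randomly generated and purely illustrative.

```python
# Illustrative k-Nearest Neighbors on plain NumPy arrays: the whole method is
# distances, sorts and votes over numeric matrices, with no table in sight.
# The data is random and exists only to make the sketch runnable.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))        # 1,000 observations, 8 numeric features
y_train = rng.integers(0, 2, size=1000)     # binary labels
X_query = rng.normal(size=(5, 8))           # points to classify

def knn_predict(X_train, y_train, X_query, k=5):
    # Pairwise Euclidean distances: an array operation, not a table join.
    dists = np.linalg.norm(X_query[:, None, :] - X_train[None, :, :], axis=-1)
    nearest = np.argsort(dists, axis=1)[:, :k]       # indices of the k closest points
    votes = y_train[nearest]                         # labels of those neighbours
    return (votes.mean(axis=1) >= 0.5).astype(int)   # majority vote

print(knn_predict(X_train, y_train, X_query))
```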


This is a nice companion piece to the article above - the data tsunami looms.
Siloes be gone: how to take a 'joined up' approach to data governance
Businesses can gain the competitive edge by aligning core business and IT processes - but how do you eliminate siloes?
Business processes that rely on fast, efficient and coordinated IT resources are the foundation of all companies - large or small. From processing credit card orders to running bills or managing the supply chain, all of these standard activities require close coordination of IT with business.

However, organisational roles are often divided between those who manage core business processes and those who manage IT. With this disjointed approach to process management, the opportunity for miscommunication, data loss or error is far higher.

By 2016 Gartner predicts that 70% of the most profitable companies will manage their processes using real-time predictive analytics or extreme collaboration.  To realise this Gartner vision, businesses should look to achieve total IT governance; sharing relevant information across the organisation, streamlining business processes and making efficient use of resources.


For Fun
OK this is way cool - there are two very short videos.
US military to get hoverbikes after tie-up with UK company
The hoverbikes will be a new class of ‘Tactical Reconnaissance Vehicle’, the US Department of Defense says
The US has joined together with a UK company to help it build a hoverbike, like those seen in Star Wars.

The US Department of Defense, Maryland-based Survice Engineering and British Malloy Aeronautics will be working together to make the hoverbike into a new class of “Tactical Reconnaissance Vehicle” (TRV), according to Malloy.

Malloy has already developed a version of the hoverbike — which looks a little like two drones with a platform between them — selling them through Kickstarter. The company raised over £64,000 last summer from 451 backers.


This is a very interesting article with lots of eye candy on the evolution of logos.
A branding expert’s visual breakdown of the year’s most popular logo trends
Since founding the popular online logo database LogoLounge in 2002, Bill Gardner has seen more than 200,000 logos (some approved, some rejected) contributed by graphic designers from more than 100 countries around the world. Every year, he summarizes his observations in a logo design trends report, grouping them into style taxonomies, as a kind of pulse survey of visual branding.

Unlike a trader following market trends or a clothes horse following fashion trends, Gardner, a graphic designer himself, isn’t necessarily following logo trends for inspiration—and he doesn’t advise anyone else to. As he says in the introduction to this year’s report, as he has for the last 13 years: “Be educated by this, stand on the shoulders of others to advance our industry, but please do not consider this report a suggestion of what your next project should look like.”