Thursday, November 26, 2015

Friday Thinking 27 November 2015

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 

In the 21st Century curiosity will SKILL the cat.

The end of lifelong employment works in two directions: Companies can offshore, outsource, and lay off workers more readily, but conversely, individuals can develop their own professional relationships and associations to maintain connections and employability. These networks are part of the asset base a worker brings to a company, a foundation of assistance on hand to help new hires succeed quickly.
The Blurry Corporation

Privacy, as it is conventionally understood, is only about 150 years old. Most humans living throughout history had little concept of privacy in their tiny communities. Sex, breastfeeding, and bathing were shamelessly performed in front of friends and family.

The lesson from 3,000 years of history is that privacy has almost always been a back-burner priority. Humans have invariably chosen money, prestige, or convenience whenever those have conflicted with a desire for solitude.
The Birth And Death Of Privacy: 3,000 Years of History Told Through 46 Images

This is a very interesting short article - outlining how, even in the traditional organization, there is an increasing requirement to collaborate with networks outside the organization, and how external tools are changing employees' experiences of what work can be like.
The Blurry Corporation
What if computer systems ​were designed so that desk workers could collaborate, regardless of ​their​ employer, their location, or ​the​ time of day?
Who works for a company? That may seem like a simple question, but the reality is that companies collaborate constantly with people who are not their employees. There are ad-hoc collaborators, contract workers, external consultants, and so on, all of whom are working for—or at least with—people who are on staff. Yet increasingly the technology that companies purport to use doesn’t accommodate this basic fact.

Historically, what is known as enterprise computing—the hardware and software that drives business operations—has been devised and locked down inside corporations, in the hope of providing security, stability, and manageability. The technology companies who make this software have a particular vision in mind: a hive of activity on some sort of corporate campus, with all the work and collaboration taking place within that perimeter, or (occasionally) by some remote user.

There are a lot of people—those who are working outside the company or those who are in it only temporarily—for whom this does not work well. Beyond the Task Rabbits and Uber drivers, we are seeing a growing number of professionals for whom a corporation provides a specific set of needs and resources to either exploit or work around in the course of a career that involves multiple jobs, gigs, or contracts.
The enterprise model of software—fixated on stable roles, responsibilities, and locations—is taking a long time to adjust to this new type of career. Business IT is deeply influenced by a normative vision of “work” as something that occurs in a singular place, secured and enjoyed by authorized people who enter and exit at clearly defined times. But as anyone working today knows, the office is as proximate as their pocket. An architecture of connected computer systems makes work less about place and more about access across sites, times, and devices. This change in the quality of availability gives rise to new technical, logistical, and psychological challenges not just for workers but for managers and technology suppliers. From a traditional enterprise standpoint, mobile and cloud infrastructure can make intellectual property appear less secure and the work day less predictable. But for workers equipped with technology that allows them to define the terms of their own productivity, this dynamic environment reflects their growing autonomy.

The progress toward new forms of currency and finance continues - including a Canadian bank.
ING and Other Top Banks Join R3 to Take the Next Step with Blockchain Technology
In September Bitcoin Magazine reported that nine global banks were pooling resources to fund R3, a next-generation global financial services company focused on applications of cryptographic technology and distributed ledger-based protocols within global financial markets.

R3 will seek to establish consistent standards and protocols for this emerging technology across the financial industry in order to facilitate broader adoption and gain a network effect, according to an R3 press release.

Several other top banks joined R3 soon thereafter.
Now, five more banks – ING, BNP Paribas, Wells Fargo, Macquarie and the Canadian Imperial Bank of Commerce – are joining R3, Reuters reports. R3, now supported by most of the world’s major banks (with notable exceptions in China), represents the first high-profile collaborative project to find out how blockchain technology can be used in finance.

Thirty banks across the world are now partnering with R3, signaling a significant commitment to collaboratively evaluate and apply this emerging technology to the global financial system.

A potentially interesting webinar on 14 December - for anyone interested in discussions about the next economy.
Mapping the New Economy.
Please join the Real Economy Lab, the Next System Project and the New Economy Coalition on December 14 at 9am PT, noon ET, 5pm GMT for an interactive webinar discussion on mapping the next system.

The inability of traditional politics and policies to address fundamental challenges has fueled an extraordinary amount of experimentation, generating increasing numbers of sophisticated and thoughtful initiatives that build from the bottom and begin to suggest new possibilities for addressing deep social, economic and ecological problems. Thus we encounter the caring economy, the sharing economy, the provisioning economy, the restorative economy, the regenerative economy, the sustaining economy, the collaborative economy, the solidarity economy, the steady-state economy, the gift economy, the resilient economy, the participatory economy, the new economy, and the many, many organizations engaged in related activities.

There are calls for a Great and Just Transition, or for reclamation of the Commons. Many of these approaches already have significant constituencies and work underway. Creative thinking by researchers and engaged scholars is also contributing to the ferment. Although they vary widely in emphases and approaches, there is a good deal of commonality. These movements seek an economy that gives true priority to people, place, and planet.

Taking the next step in collective development will require better information on the array of organizations and initiatives active in this space as well as efforts to identify potential areas of cooperation and collaboration. Beyond that loom questions of scale and replicability. The Real Economy Lab (REL) has been surveying the landscape and identifying the linkages and is seeking to provide an interactive platform where the cumulative knowledge, aims, and resources of these movements can be drawn together in order to seek common ground and drive coordinated action.

In this webinar REL will present their work to date and invite you to join them and a panel of leading thinkers and practitioners in discussing these issues. We will hear about the work of REL as a connector of change makers in the next economy space, working to raise awareness and understanding of new economy theory and practice and help connect the thinkers and doers in this world for collaboration and movement building. REL will explain its theory of change and unique role in this evolving new economy ecosystem and walk us through one of their core tools, the mindmapping of the next economy ecosystem.

This is an interesting new project that is well worth looking into, because its implications apply readily to social science research on any behavior in the digital environment.
You Are Who You Play You Are?
GAMR seeks to understand the relationships between game play and real-life behavior
Video games offer fictional and fantastic worlds to explore, worlds in which our actions are free from concrete real-world consequences. They allow us to direct deep narratives, explore alternate realities, exercise cognitive skills, and experience complex social connections. All of this has made games the ascendant form of mass entertainment for several generations of players.

When we step into the fiction of video games, we choose–consciously and subconsciously–to act out a version of ourselves, or to deviate from and experiment with our identities. But how far can we (or do we) deviate from our real-world selves when we play?

Real-life skills, interests, and abilities might be reasonably expected to transfer into in-game behavior. But one key aspect of in-game behavior–unlike the real world–is that it can be extensively tracked and analyzed. This analysis is used frequently by game developers to understand their players better, seeking to make better games. If such analyses can be coupled with thoughtful analysis of real-world behavior, we can start to understand the relationship between the two.

GAMR (Game and Mind Research), a collaboration between the MIT Media Lab and the Games group at Tilburg University, is marrying detailed cognitive analysis of players to the deep data provided by their behavior in games like League of Legends, Battlefield, and World of Warcraft. From the way someone plays, is it possible to tell how social, conscientious, or creative they are? GAMR is setting out to find the answer to these questions and many others.

Launched in November 2015, GAMR is an exploratory study to pinpoint what cognitive traits can be determined from game behavior, and how. The study samples the full range of who, why, what, when, and how the player engages with video games, by finding the connections between two separate sets of data: players’ game behavior on the one hand, and their cognitive traits on the other.

This is a long article - but for anyone interested in understanding the centrality and vital importance of Good Design principles this is a MUST READ.
How Apple Is Giving Design A Bad Name
For years, Apple followed user-centered design principles. Then something went wrong.
Once upon a time, Apple was known for designing easy-to-use, easy-to-understand products. It was a champion of the graphical user interface, where it is always possible to discover what actions are possible, clearly see how to select that action, receive unambiguous feedback as to the results of that action, and have the power to reverse that action—to undo it—if the result is not what was intended.

No more. Now, although the products are indeed even more beautiful than before, that beauty has come at a great price. Gone are the fundamental principles of good design: discoverability, feedback, recovery, and so on. Instead, Apple has, in striving for beauty, created fonts that are so small or thin, coupled with low contrast, that they are difficult or impossible for many people with normal vision to read. We have obscure gestures that are beyond even the developer’s ability to remember. We have great features that most people don’t realize exist.

The products, especially those built on iOS, Apple’s operating system for mobile devices, no longer follow the well-known, well-established principles of design that Apple developed several decades ago. These principles, based on experimental science as well as common sense, opened up the power of computing to several generations, establishing Apple’s well-deserved reputation for understandability and ease of use. Alas, Apple has abandoned many of these principles. True, Apple’s design guidelines for developers for both iOS and the Mac OS X still pay token homage to the principles, but, inside Apple, many of the principles are no longer practiced at all. Apple has lost its way, driven by concern for style and appearance at the expense of understandability and usage.

Signals for the change in conditions of change and the geo-politics of energy.
139 Countries Could Get All of their Power from Renewable Sources
Energy from wind, water and sun would eliminate nuclear and fossil fuels
Mark Jacobson and Mark Delucchi have done it again. This time they’ve spelled out how 139 countries can each generate all the energy needed for homes, businesses, industry, transportation, agriculture—everything—from wind, solar and water power technologies, by 2050. Their national blueprints, released Nov. 18, follow similar plans they have published in the past few years to run each of the 50 U.S. states on renewables, as well as the entire world. (Have a look for yourself, at your country, using the interactive map below.)

The plans, which list exact numbers of wind turbines, solar farms, hydroelectric dams and such, have been heralded as transformational, and criticized as starry eyed or even nutty.

Determined, Jacobson will take his case to leaders of the 195 nations that will meet at the U.N. climate talks, known as COP 21, which begin in Paris on Nov. 29. His point to them: Although international agreements to reduce carbon dioxide emissions are worthwhile, they would not even be needed if countries switched wholesale to renewable energy, ending the combustion of coal, natural gas and oil that creates the vast majority of those emissions, and without any nuclear power. “The people there are just not aware of what’s possible,” says Jacobson, a civil and environmental engineering professor at Stanford University and director of the school’s Atmosphere and Energy Program. He is already scheduled to speak twice at the meeting, and will spend the rest of his time trying to talk one on one with national leaders and their aides.

And one more report about the looming shift in energy paradigm - a phase transition in energy geopolitics.
In 15 years, renewables will be the world’s biggest source of electricity
Coal is the biggest source of electricity worldwide. But not for much longer.
In a comprehensive report on the future of energy, the International Energy Agency, a policy and research group, said that renewables now sit in second place in the global electricity production mix.

By the 2030s, they will become the biggest source…

“The single largest energy demand growth story of recent decades is near its end,” the IEA wrote. “Recent years have seen a marked slowdown in global coal demand growth, led by China,” and China’s coal use is projected to have pretty much plateaued.
Meanwhile, China has become the biggest installer of renewable energy technologies.

Here’s more support for the speed of the change - or the phase transition in the world’s energy situation.
Leapfrogging to Solar: Emerging Markets Outspend Rich Countries for the First Time
China alone is adding more renewables than the U.S., U.K., and France combined.
We all know the story of how mobile phones took off in emerging markets. Suddenly small cocoa farmers in Africa who never had a landline or a computer were checking commodity prices on their smartphones.

Today something similarly profound is starting to happen with renewable energy.
For the first time, more than half the world's annual investment in clean energy is coming from emerging markets instead of from wealthier nations, according to a new analysis by Bloomberg New Energy Finance. The handoff occurred last year, and it's just the beginning.

The chart below shows quarterly clean-energy investment in OECD countries vs. non-OECD countries. The trajectory is clear: If you’re a power plant salesperson, you’re probably going to be working with renewables in poor countries from now into the foreseeable future.  

The world recently passed a turning point and is adding more capacity for clean energy each year than for coal, natural gas, and oil combined. For that trend to continue, rapidly developing economies are critical.

This is a great TED Talk about the near future of medicine focused on osteo-arthritis - something many of us Baby-Boomers have looming.
Siddhartha Mukherjee: Soon we'll cure diseases with a cell, not a pill
Current medical treatment boils down to six words: Have disease, take pill, kill something. But physician Siddhartha Mukherjee points to a future of medicine that will transform the way we heal.

This is interesting - a perspective that views plants domesticating bacteria in order to fix nitrogen.
Harnessing a peptide holds promise for increasing crop yields without more fertilizer
Molecular biologists at the University of Massachusetts Amherst who study nitrogen-fixing bacteria in plants have discovered a "double agent" peptide in alfalfa that may hold promise for improving crop yields without increasing fertilizer use.

In the current early online edition of Proceedings of the National Academy of Sciences, lead author and postdoctoral researcher Minsoo Kim, former undergraduate student Chris Waters, and professor Dong Wang of UMass Amherst's biochemistry and molecular biology department, with colleagues at the Noble Foundation in Oklahoma, report that alfalfa appears to use an advanced process for putting nitrogen-fixing bacteria, rhizobia, to work more effectively after they are recruited from soil to fix nitrogen in special nodules on plant roots.

As Wang and Kim explain, legumes attract nitrogen-fixing bacteria to their roots from the surrounding soil. Once inside the host plant, rhizobia form nodules on its roots and the plant starts to transform the bacteria into their nitrogen-fixing state. In return for borrowing the rhizobia's essential enzymes that turn nitrogen into useful ammonia, the plant gives the bacteria fixed carbon, the product of photosynthesis.

In alfalfa, this transformation of bacteria is called differentiation, which Wang likens to domestication, because it makes the bacteria reliant on their plant host. "They are no longer wild and able to live outside the plant," he says. "I think of it as analogous to domestication of animals by humans." He adds, "Bacteria that can no longer proliferate as free-living individuals are a bit like slaves at that point, living to serve the plant."

Here’s another potential gene-editing application that could change the lives of many, many people - and prevent the spread of diseases that will come with climate change.
Mosquitoes engineered to zap ability to carry malaria
Efficacy test of gene drive shows it works better in males than females
A new genetic engineering technique may quickly inoculate mosquitoes against malaria, helping to end the spread of the disease in humans.

Using a gene-editing tool known as CRISPR/Cas9, researchers have made a “genetic vaccine” that will continually inject itself into mosquitoes’ DNA. Such a vaccine, known as a gene drive, could spread to nearly every mosquito in a population within a few generations. The accomplishment, described online November 23 in the Proceedings of the National Academy of Sciences, brings researchers one step closer to malaria eradication.

“This work suggests that we're a hop, skip and jump away from actual gene drive candidates for eventual release,” says Kevin Esvelt, a synthetic biologist at Harvard University who was not involved in the work.

There is more we will do with cells than replace pills - I imagine a ‘cultured hotdog’ may sound like it won’t get past the ‘ick’ factor - but then, just think: what’s in a normal hotdog? The products are worth a look.
10 Lab-Made Meats, Cheeses And Other Odd Startup Foods
Lab-grown hamburgers and Jarlsberg in a test tube? Several Silicon Valley scientists are producing edibles of the truly bizarre inside their labs or mixing up animal substitutes based on some interesting ingredients. Naturally (or not so much) we’re here to take you on a tour of some of this fascinating fare.

Not only new personalized genetics but implants are transforming to much less invasive capabilities.
The world’s smallest, minimally invasive cardiac pacemaker was successfully implanted in 99.2 percent (719 of 725) of patients participating in an international clinical trial.

The findings also showed that the Micra TPS—about the size of a large vitamin—met safety and effectiveness endpoints with wide margins.

Approximately 96 percent of patients experienced no major complications, which is 51 percent fewer than what is normally seen in patients with conventional pacing systems. Major complications included cardiac injuries (1.6 percent), complications at the groin site (0.7 percent), and pacing issues (0.3 percent).

For everyone who enjoys TED Talks - here’s a list of talks about revolution in medicine.
7 Innovations That Will Change the Future of Medicine
From a gel that can instantly stop traumatic bleeding to a laser for targeted treatment of HIV, these medical innovations created by TED Fellows could soon be saving lives.

This is an interesting visualization of plate tectonics - you can advance or reverse frame by frame - it provides an interesting complement to thinking about conditions on earth during the course of evolution.
Global Plate Reconstructions

From Global Plate reconstructions to new means of seeing life systems at micro-nanoscales - this is a 1 ½ hour video lecture by a 2014 Nobel Laureate that is fascinating. Literally ‘looking beyond boundaries’. He is currently exploring new frontiers. The important message is near the end - where he describes the difficulty and tremendous importance of ‘knowledge transfer’ in getting new technology into the hands of those who would know best where it can be used to generate knowledge. The master-apprentice relationship remains at the heart of knowledge management because knowledge is embodied and its transfer involves collective tacit knowledge.
Eric Betzig: Imaging Life at High Spatiotemporal Resolution
In this lecture, held on 3/9/15 at UC Berkeley, Nobel Laureate Eric Betzig, describes three areas focused on addressing the challenges of high resolution imaging: super-resolution microscopy; plane illumination microscopy using non-diffracting beams; and adaptive optics to recover optimal images from within optically heterogeneous specimens.

And here’s what’s coming to astronomy by 2020. Big Data is changing how we do science - not just astronomy. How many answers lie in the data out there to questions we don’t know exist yet?
Andrew Connolly: What's the next window into our universe?
Big Data is everywhere — even the skies. In an informative talk, astronomer Andrew Connolly shows how large amounts of data are being collected about our universe, recording it in its ever-changing moods. Just how do scientists capture so many images at scale? It starts with a giant telescope …

But the telescopes we've used over the last decade are not designed to capture the data at this scale. The Hubble Space Telescope: for the last 25 years it's been producing some of the most detailed views of our distant universe, but if you tried to use the Hubble to create an image of the sky, it would take 13 million individual images, about 120 years to do this just once.

So this is driving us to new technologies and new telescopes, telescopes that can go faint to look at the distant universe but also telescopes that can go wide to capture the sky as rapidly as possible, telescopes like the Large Synoptic Survey Telescope, or the LSST, possibly the most boring name ever for one of the most fascinating experiments in the history of astronomy, in fact proof, if you should need it, that you should never allow a scientist or an engineer to name anything, not even your children. (Laughter) We're building the LSST. We expect it to start taking data by the end of this decade. I'm going to show you how we think it's going to transform our views of the universe, because one image from the LSST is equivalent to 3,000 images from the Hubble Space Telescope, each image three and a half degrees on the sky, seven times the width of the full moon.

Well, how do you capture an image at this scale? Well, you build the largest digital camera in history, using the same technology you find in the cameras in your cell phone or in the digital cameras you can buy in the High Street, but now at a scale that is five and a half feet across, about the size of a Volkswagen Beetle, where one image is three billion pixels. So if you wanted to look at an image in its full resolution, just a single LSST image, it would take about 1,500 high-definition TV screens.
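The screen count quoted there is easy to sanity-check. A quick sketch, taking the quoted three-billion-pixel figure and assuming standard 1080p screens:

```python
# Sanity check of the figures quoted above (assumed: 1080p displays).
lsst_pixels = 3_000_000_000      # "one image is three billion pixels"
hd_pixels = 1920 * 1080          # pixels in one high-definition screen

screens = lsst_pixels / hd_pixels
print(round(screens))            # on the order of 1,500 screens
```

Close to 1,500 - and each of those images arrives every 20 seconds, all night, every night.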

And this camera will image the sky, taking a new picture every 20 seconds, constantly scanning the sky so every three nights, we'll get a completely new view of the skies above Chile. Over the mission lifetime of this telescope, it will detect 40 billion stars and galaxies, and that will be for the first time we'll have detected more objects in our universe than people on the Earth. Now, we can talk about this in terms of terabytes and petabytes and billions of objects, but a way to get a sense of the amount of data that will come off this camera is that it's like playing every TED Talk ever recorded simultaneously, 24 hours a day, seven days a week, for 10 years.

And to process this data means searching through all of those talks for every new idea and every new concept, looking at each part of the video to see how one frame may have changed from the next. And this is changing the way that we do science, changing the way that we do astronomy, to a place where software and algorithms have to mine through this data, where the software is as critical to the science as the telescopes and the cameras that we've built.

Here’s an innovation that looks close to ready for primetime and may help many organizations transition into the 21st century by providing a super-fast, high-bandwidth internal wireless network that is also more secure than current systems. Plus it would move us to better and greener forms of indoor lighting.
Real-World Testing Shows Li-Fi, A Wireless Technology Using Light Bulbs, To Be 100 Times Faster Than Wi-Fi
Real-world testing of Li-Fi, an advanced wireless communication technology, has found it to be 100 times faster than currently available Wi-Fi networks. Reporting impressive data transmission rates of around 1 GB per second, the new system was recently trialled, for the first time, in office and industrial settings in Tallinn, Estonia. At such speeds, it would take mere seconds to download a high-definition film of 1.5 GB.

Developed back in 2011 by scientist Harald Haas, of the University of Edinburgh, Li-Fi is a high speed, bidirectional wireless technology based on visible light communication (VLC). Using visible light ranging from 400 to 800 terahertz (THz), the technology can transmit data in binary code, by switching LED bulbs on and off within nanoseconds. The bulbs are turned on and off at too high a speed to be visible to the naked eye.
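At its core, that on/off switching is a form of on-off keying: each bit of the payload maps to an LED state. A minimal sketch of the principle (an illustration only - the real Li-Fi modulation layers far more on top, and these function names are invented for the example):

```python
def to_led_states(data: bytes) -> list[int]:
    """Encode bytes as LED states (1 = on, 0 = off), most significant bit first."""
    return [(byte >> bit) & 1 for byte in data for bit in range(7, -1, -1)]

def from_led_states(states: list[int]) -> bytes:
    """Decode a sequence of observed LED states back into bytes."""
    out = bytearray()
    for i in range(0, len(states), 8):
        byte = 0
        for state in states[i:i + 8]:
            byte = (byte << 1) | state
        out.append(byte)
    return bytes(out)

message = b"Li-Fi"
assert from_led_states(to_led_states(message)) == message  # round-trips cleanly
```

Run at billions of such transitions per second, the flicker is invisible - which is how the bulb can light a room and carry data at once.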

In the past, lab tests of Li-Fi have yielded incredibly high speeds of up to 224 gigabits per second. Apart from faster data transmission, Li-Fi boasts several advantages over current Wi-Fi technologies. Since light waves cannot pass through walls, it ensures less interference between different devices and greater security against hacking. Furthermore, unlike other networks, it can be safely used in sensitive areas, like hospitals, airplane cabins and nuclear power plants, without causing any kind of electromagnetic interference.

This is a long but fascinating article on the history of privacy - well worth the read for anyone interested in some perspective on the human experience in this domain that seems to be in crisis today.
The Birth And Death Of Privacy: 3,000 Years of History Told Through 46 Images
Privacy, as we understand it, is only about 150 years old.

Humans do have an instinctual desire for privacy. However, for 3,000 years, cultures have nearly always prioritized convenience and wealth over privacy.

Section II will show how cutting edge health technology will force people to choose between an early, costly death and a world without any semblance of privacy. Given historical trends, the most likely outcome is that we will forgo privacy and return to our traditional, transparent existence.

This is a new form of privacy - perhaps one we think we have been training ourselves to enact since… mass media and the dependence on advertising - attentional autonomy.
I’d Like To Teach The World To Ad Block
I wish I could go door to door and help people install ad blockers. Door to door, like Don Draper meets Johnny Appleseed, but for ad blockers.

It may sound like I’m a turncoat selling out my own profession. So before you start yelling, here’s the high-level argument:

  • Most of what ad blockers would block was junk/fraud anyway. Digital advertising is in a crisis — or a “subprime state,” if you will. In addition to a flood of near-valueless, low-viewability impressions, fraud is everywhere, whether as bots or ad injection. The race to the bottom seems to have reached its destination.
  • The short-term pain suffered by quality publishers will be outweighed by the long-term benefit of a more accurate representation of how much real human attention there is.
  • Ad blockers are easy to detect, so publishers can easily ask consumers to turn them off. And for quality content, people will.
  • Consumers are paying for mobile data usage, and even non-fraudulent ad technology creates horrible user experiences and egregious load times. Consumers should have more control over their own data, so resetting the industry to an opt-in advertising model would be a good thing.

Anyone interested in the concept of collective intelligence should have heard of Pierre Levy. But whether or not you have read his work, here’s a recent blog post.
My Research in a Nutshell
The human species can be defined by its special ability to manipulate symbols. Each great augmentation in this ability has brought enormous economic, social, political, religious, epistemological, educational (and so on) changes.

I think that there have been only four of these big changes. The first is related to the invention of writing, when symbols became permanent and reified. The second corresponds to the invention of the alphabet, Indian numerals and other small groups of symbols able to represent “almost everything” by combination. The third is the invention of the printing press and the subsequent invention of electronic mass media. In this case, the symbols were reproduced and transmitted by industrial machines. We are currently at the beginning of a fourth big anthropological change, because symbols can now be transformed by massively distributed automata in the digital realm. My main hypothesis is that we have not yet invented the symbolic systems and cultural institutions fitting the new algorithmic medium. So my research over the past 15 years has been devoted to the invention of a symbolic system able to exploit the computational power, the capacity of memory and the ubiquity of the Internet. This symbolic system is called IEML, for Information Economy MetaLanguage.

It is:
(1) an artificial language that automatically computes its internal semantic relations and translates itself into natural languages,
(2) a metadata language for the collaborative semantic tagging of digital data,
(3) a new addressing layer of the digital medium (conceptual addressing) solving the semantic interoperability and network efficiency problems,
(4) a programming language specialized in the design of semantic networks,
(5) a semantic coordinate system of the mind (the semantic sphere), allowing the computational modeling of human cognition and the self-observation of collective intelligences.

For Fun
This is totally and tonally cool - the sound of knowledge self-management on a global scale. And the visuals are lovely as well. It is like the ‘whale song of human knowledge’. This is definitely a MUST VISIT.
Listen to Wikipedia
Listen to the sound of Wikipedia's recent changes feed. Bells indicate additions and string plucks indicate subtractions. Pitch changes according to the size of the edit; the larger the edit, the deeper the note. Green circles show edits from unregistered contributors, and purple circles mark edits performed by automated bots. You may see announcements for new users as they join the site, punctuated by a string swell. You can welcome him or her by clicking the blue banner and adding a note on their talk page.
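The mapping it describes is simple enough to sketch. Here is a toy version in Python - the pitch curve and the function itself are illustrative assumptions; the project’s real implementation (in D3 and HowlerJS) is on GitHub:

```python
import math

def edit_to_sound(size_delta: int, registered: bool, bot: bool) -> dict:
    """Map one recent-changes event to sound/visual parameters as described:
    additions ring bells, subtractions pluck strings, larger edits play
    deeper notes, and circle color marks the kind of contributor."""
    instrument = "bell" if size_delta >= 0 else "string pluck"
    # Larger edits -> deeper notes: pitch falls logarithmically with edit size.
    pitch = max(0.0, 1.0 - math.log10(abs(size_delta) + 1) / 4)
    circle = "purple" if bot else ("green" if not registered else "white")
    return {"instrument": instrument, "pitch": round(pitch, 2), "circle": circle}
```

A large anonymous addition, for instance - `edit_to_sound(+1200, registered=False, bot=False)` - comes out as a deep bell with a green circle.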

This project is built using D3 and HowlerJS. It is based on Listen to Bitcoin by Maximillian Laumeister. Our source is available on GitHub, and you can read more about this project.

Built by Stephen LaPorte and Mahmoud Hashemi.

I love this - I don’t know how many bottles or tubes of glue I’ve had to throw out because they dried out.
This Dry Glue Only Becomes Sticky When You Crush It
Usually, dry glue is a sign that you need a new tube of adhesive. But researchers in Japan have developed a new type of glue that’s perfectly dry until you crush it—at which point it becomes super sticky.

Adhesives that are solid at room temperature aren’t anything new, of course, but usually they require heat to melt them before use. What’s different about these small balls of glue, then, is that only mechanical force is required to activate their sticky potential.

That’s possible because they’re actually beads of liquid latex coated in calcium-carbonate nanoparticles. You can roll them around your hand or store them in a pot without any adhesive action taking place, but when compressed the liquid latex bursts forth. The research was carried out by scientists from Osaka Institute of Technology in Japan and is published in Materials Horizons.

The team points out that the glue is stronger than the kind of pressure-sensitive adhesives you find on Post-It notes, and reckons it could prove useful in applications where getting glue into a space without it sticking to things along the way is a concern.

The dust mote computer looms ever closer. A 2 ½ min video explains.
Raspberry Pi’s latest computer costs just $5
Raspberry Pi has just unveiled the Pi Zero, a programmable computer that costs only $5 — about as much as a cup of coffee.

Available from today, the Pi Zero is the charity organization’s smallest computer ever and packs enough power and components to match up to other offerings in the Pi family. In fact, it’s half the size of the Model A+ released last year, but offers twice the power.

For a measly $5, you get a Broadcom BCM2835 application processor that’s 40 percent faster than the Raspberry Pi 1 and 512MB RAM. There’s also a microSD memory card slot, a mini-HDMI socket for video output at Full HD resolution and 60 frames per second, and Micro-USB sockets for data and power.