Thursday, September 17, 2015

Friday Thinking 18 September 2015

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 

In the 21st Century curiosity will SKILL the cat.

“By the time you have an app like Uber or Instagram, you are at the tenth layer of a technology stack. You don’t have Uber without Google Maps, and you don’t have Google Maps without GPS and satellite imaging and the Internet and the iPhone and LTE and miniaturization and all these things.”
Here’s how primitive the infrastructure is now: The Bitcoin blockchain can currently handle 7 transactions a second, while credit card giant Visa can process 56,000. No wonder Bitcoin players liken the current state of the technology to the Internet of 1994 and 1995, a time when going online meant dialing up AOL and connecting at 14.4 kilobits per second.
Bitcoin's Shared Ledger Technology: Money's New Operating System
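To put the excerpt’s figures side by side, a trivial back-of-the-envelope calculation (numbers exactly as quoted above):

```python
# Back-of-the-envelope on the throughput gap quoted in the excerpt.
bitcoin_tps = 7        # blockchain transactions per second (2015 figure)
visa_tps = 56_000      # Visa's quoted processing capacity

ratio = visa_tps / bitcoin_tps
print(f"Visa's capacity is {ratio:,.0f}x Bitcoin's")  # → Visa's capacity is 8,000x Bitcoin's

# At 7 tps, clearing one second of Visa-scale volume would take:
hours = (visa_tps / bitcoin_tps) / 3600
print(f"{hours:.1f} hours")  # → 2.2 hours
```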

Nearly everyone in the room felt that because of the internet, no one cared about privacy anymore. To be private at all was an outmoded concept. Identity was continually seen as something exclusively living in databases powerful others kept about you, to sell or mine or monitor. This idea pissed me off. The net is new, and we don't know how to live there. It doesn't mean we think anyone should be able to know our thoughts and see us naked. It means understanding and custom haven't caught up to the reality of how we live our lives nowadays.

Custom is always more important than law, and eventually custom adapts to new ways of being together. I talked about how custom could adapt to life on the network, and cited the urban migration as a place to look for historical examples. We don't listen in on cafe conversations or leave our bathrooms doorless, not because of laws, but because that would be awful. Increasingly on the network, like city life, it's not merely about what you can manage to physically see. The act of looking makes you a creepy fucker.

It was not lost on everyone in the room that I was calling the people who watch -- such as the IC -- creepy fuckers. Not everyone in the community is comfortable with where it has gone, but they've also spent the last decade bombarded with the message that no one cares about privacy. "People give up their privacy for the chance at a $.10 off coupon!" one person declared, to general agreement.

It was a room that had written off privacy as an archaic structure. I tried to push back, not only by pointing out this was the opening days of networked life, and so custom hadn’t caught up yet, but also by recommending danah boyd's new book It’s Complicated repeatedly. (Disclosure: I was a paid editor/consultant on the book) The room didn't understand that privacy is part of human life and we all want it. Their institutions were invested in not learning that people want private lives. They would be forced to either stop what they were doing, or give up their core identity -- that of the good guys.

From the first discussion and through the whole day there was a lot of talk of virtual vs “real” identity. It was once understood that legal identity is as much a fiction as any other invented identity, but this seems long forgotten in these places. Legal identity, far from being “real” or “physical” as people in our government think of it now, was identity based on forms of legal liability constructed for political benefit to the state. It was about taxing you, drafting you, moving you, or throwing you in jail — legal identity was never concerned with reality. I found the conflation of that with either reality or physicality to be the single most chilling thing I encountered in that room, even more so than surveillance. When the men with guns can't tell the difference between you and your social security number, the mechanisms of power have lost vital critical thinking.
Quinn Norton - A Day of Speaking Truth to Power - Visiting the ODNI

This is a MUST VIEW 6 min video - it’s the future of storytelling, of art, of design, of learning….
Glen Keane – Step into the Page
2015 Future of StoryTelling Summit Speaker: Glen Keane
Animator, The Little Mermaid, Tarzan, Beauty and the Beast, and Duet
Keane's VR painting is created in Tilt Brush:

Over nearly four decades at Disney, Glen Keane animated some of the most compelling characters of our time: Ariel from The Little Mermaid, the titular beast in Beauty and the Beast, and Disney’s Tarzan, to name just a few. The son of cartoonist Bil Keane (The Family Circus), Glen learned early on the importance of holding onto your childhood creativity—and how art can powerfully convey emotion. Keane has spent his career embracing new tools, from digital environments to 3D animation to today’s virtual reality, which finally enables him to step into his drawings and wander freely through his imagination. At FoST, he'll explore how to tap into your own creativity, connecting to emotion and character more directly than ever before.

I remember the first time I saw a ‘flavor wheel’ - a circle with the names of all sorts of possible flavors - and I realized that such an aid could let me appreciate a wine, scotch, coffee, or chocolate much more profoundly. I realized that most of the time I had no words to name what I was tasting. Recently I experienced this in my home-roasted coffee (yes, it’s so inexpensive and wondrous to roast your own coffee). I had been told that the batch of Yirgacheffe had ‘blueberry notes’. I had never tasted any such notes in coffee before - but the first time I tasted this coffee, I was able to recognize: yes, blueberry. Later my son tasted the coffee and noted something unique, and when I said ‘blueberry?’ he immediately recognized the taste.
Suppose we had names for new emotions?
Obscure Emotions Make Us Speechless, but Let’s Give Them a Name
The human mind is complicated and wondrous; we are able to experience such deep and complex emotional states. Some of our emotional perceptions are so vivid and awesome they can literally stop us in our tracks and cause us to re-organize our beliefs, so life can make sense again.

Here is a list of emotions you may have felt but could not describe with one single word, until now.

It would be pretty cool if we could unleash these words from ‘The Dictionary of Obscure Sorrows’ and begin using them more commonly, although I’m sure most people would simply nod, then rush home to look up the dictionary definition for next time.
7 obscure emotions and their meanings
With short videos.
For example:
Onism: The feeling that in your whole life you’ll only get to experience a little piece of the wide, wide world.

Here’s the site
The Dictionary of Obscure Sorrows

And while the topic is on new forms of feeling and sensation, here’s something looming on the horizon - just think of what happens when anyone can connect with a distant prosthetic and feel ‘something’.
Paralyzed Man Successfully Given Prosthetic Hand That Can 'Feel'
A 28-year-old man who has been paralysed has been given a new sense of touch following a breakthrough that saw electrodes placed directly into his brain.

The research and clinical trial has been carried out by DARPA, the US military’s research agency. Essentially, the man (who has not been named) is now able to control his new hand and feel people touching it because of two sets of electrodes: one array on the motor cortex, the part of the brain that directs body movement, and one on the sensory cortex, the part that feels touch.

The prosthetic hand itself was developed by the Applied Physics Laboratory at Johns Hopkins University, and contains torque sensors that can detect when pressure is being applied to the fingers and generate an electrical signal carrying this information to the brain.

And talking about remote sensing here’s something that looms very close - within a decade our transportation environment might look the same - and be unrecognizable.
New York is getting wired with traffic signals that can talk to cars
Behind self-driving, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication is one of the biggest sea changes in transportation technology on the horizon — it could have an enormous impact on driving safety, if it's implemented quickly and correctly. The concept is pretty simple: cars, signs, and traffic signals all communicate with one another over Wi-Fi-like airwaves, so that drivers (and automatic safety systems built into cars) have more information about the traffic and environment around them. (I got a compelling demo of V2V tech put on by Ford at CES a couple years ago, and I can say that the promise is pretty huge.)

There's no federal rule in place for requiring V2V yet, but the US Department of Transportation is hoping to get those rules in place by the end of this year — and in the meantime, it's rolling out huge new pilot programs to put the technology to the test. In the New York City boroughs of Manhattan and Brooklyn, traffic signals will be equipped with V2I hardware, while up to 10,000 city-owned vehicles will be outfitted with V2V. (It's unclear whether drivers of these vehicles will have access to the data through instrumentation, or whether it's just being collected as part of the DOT's ongoing V2V research.)

As part of the same announcement, the DOT is awarding $17 million to Tampa to try to alleviate rush hour congestion with V2V tech and "to protect the city's pedestrians by equipping their smartphones with the same connected technology being put into the vehicles," while the state of Wyoming will be spinning up a pilot program to track heavy-duty trucks along Interstate 80.

The promise of V2V is pretty huge: imagine being warned of a chain-reaction collision several cars in front of you that you can't see, for instance, or a disabled truck that can let you know to stay clear of the right lane ahead. And it might not be that far off — even though the rules aren't set in stone yet, GM has already committed to starting its rollout to production cars sometime in 2016. In fact, the DOT's press release is even more optimistic, saying we could see V2V in "early 2016."
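The chain-reaction-warning scenario above can be sketched as a toy simulation: each vehicle broadcasts a status message, and a following car flags hard-braking vehicles ahead in its lane even when they are hidden from view. The message fields, names, and thresholds below are invented for illustration; real deployments use standardized formats such as the SAE J2735 Basic Safety Message over dedicated short-range radio.

```python
from dataclasses import dataclass

@dataclass
class SafetyMessage:
    """Toy stand-in for a V2V status broadcast (fields are illustrative)."""
    vehicle_id: str
    lane: int
    position_m: float   # distance along the road
    speed_mps: float
    hard_braking: bool

def warnings_for(me: SafetyMessage, received: list,
                 horizon_m: float = 300.0) -> list:
    """Flag hard-braking vehicles ahead in my lane, even if out of sight."""
    alerts = []
    for msg in received:
        ahead = 0 < msg.position_m - me.position_m <= horizon_m
        if ahead and msg.lane == me.lane and msg.hard_braking:
            alerts.append(f"brake warning: {msg.vehicle_id} "
                          f"{msg.position_m - me.position_m:.0f} m ahead")
    return alerts

me = SafetyMessage("car-42", lane=1, position_m=0.0, speed_mps=30.0,
                   hard_braking=False)
traffic = [
    SafetyMessage("truck-7", lane=1, position_m=180.0, speed_mps=2.0,
                  hard_braking=True),   # hidden several cars ahead
    SafetyMessage("car-9", lane=2, position_m=50.0, speed_mps=28.0,
                  hard_braking=True),   # braking, but in another lane
]
print(warnings_for(me, traffic))  # → ['brake warning: truck-7 180 m ahead']
```

The point of the sketch: the warning depends only on received messages, not on line of sight.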

Patents and copyright were developed primarily to help increase innovation - and originally the timeframes were relatively short. This is no longer the case: the logic has become one of protecting intellectual property, thereby making ‘non-rival’ knowledge artificially scarce, with the result that innovation is reduced.
Biomedical patents reduce innovation by 30%
Is intellectual property necessary for innovation? Is it counterproductive? For the first time, the publication of significant quantities of evidence from the Human Genome Project demonstrates the latter.
As much as the official discourse would like it to be, the debate on intellectual property is not about whether authors or inventors would earn the same thing or more if this legal monopoly was abolished. The question is whether we need rents from a monopoly that only exists thanks to legislation for innovation to exist and whether more innovation is created with protection from intellectual property or without it.

In the field of theory, Michele Boldrin made a fundamental contribution which is now part of the corpus of economic theory by demonstrating that under certain conditions, which are common and widespread today, that incentive is not necessary.

Empirical evidence, however, in fields like the biomedical and pharmaceutical industries was scarce, though it did point to the innovator having incentives beyond patents that would be sufficient to justify and profit amply from R&D.

The type of evidence necessary is two similar innovations, one patented and the other not, coexisting in the market from the outset.
The definitive case: the human genome
We surely owe the definitive empirical proof to the recent paper by Heidi Williams, a Ph.D. student in Economics at Harvard University. Williams compares the consequences of the Human Genome Project, whose genome-sequencing results belong to the public domain, with those of Celera, a business that hoarded its results under patents.

What’s interesting is that there are genes that were originally protected by Celera, which, by being resequenced through public effort, then became patent-free. This way, Williams could really do two different studies: in one, she compared the impact of patented genes with genes in the public domain from the moment of their sequencing, and in the other, the result of genes that were originally Celera’s being returned to the public domain.

The result in both cases was similar: patents decreased innovation and its results by 30%. Additionally, in the cases where Celera enjoyed a brief period of monopoly, the negative effects on innovation were maintained, though at a smaller scale, after the gene sequencing was released. That is, the negative effects of intellectual property on innovation tend to persist even after the end of legal protection.

If we extend these conclusions to other settings of intellectual property, we’ll understand, for example, why books in the public domain lead to new editions and translations with more regularity...

Speaking about biotechnology innovation - here’s something already over the threshold.
Why the Future of Drugs Is in Genetically Engineered Microbes
Though a new generation of genetically engineered microbes is raising fears about home-brew heroin, a technology de-coupled from the whims of growing seasons could also mean cheaper, legal drugs.
Earlier this year, experts and law enforcement agencies worried about the possibility of cheap, home-brew heroin when two teams of scientists reported that they had created genetically engineered yeast strains that could almost make morphine from sugar. With morphine-making yeast, a would-be Walter White could brew heroin in an Albuquerque basement without relying on opioids extracted from poppies grown halfway across the world. The new genetically engineered yeast couldn’t quite go all the way from sugar to morphine, but it was clearly only a matter of time before someone made a strain that could.

Three months later, a third team of scientists, led by synthetic biologist Christina Smolke at Stanford University, finished the job. In a paper out last month, the scientists describe two genetically engineered yeast strains that metabolize sugar into opioids.

While the risk of home-brewed heroin posed by these yeast strains is small—the opioid yield is very low, and, instead of morphine, the yeast make opioids that are harder to process into heroin—the implications of this work are large. These genetically engineered microbes are part of a major technical advance that could change how we make, distribute, price, and regulate drugs.

Here’s something that will be totally exciting for many, many youngsters - but maybe not for their parents - who will be pressed to get their kids their own mobile devices and wearables. But that is inevitable anyway. This is the future of gaming.
The kids who will be playing this game are going to be the future recruits of public services and corporate businesses - What are they going to expect - MS Word 10? Cubicles?
Pokémon Comes to Smartphones, Led by Ex-Google Game Studio
For nearly 20 years, Pokémon has been a killer franchise for Nintendo’s handheld gaming consoles, from the Game Boy to today’s Nintendo 3DS. But times are changing: Nintendo is finally looking to smartphones, and the next iteration of the monster-collecting game is getting a serious mobile rethink.

The new title, Pokémon Go, is being developed by Niantic Labs, a game studio that was formerly part of Google until the company reorganized to become Alphabet. Niantic previously developed Ingress, a mobile game that tracked players’ locations and rewarded them for traveling around the real world.

It looks as though Pokémon Go, slated for release next year, will echo Ingress. As fans of the game series know, the conceit is that strange creatures roam the world and can be caught, traded and battled, like a weird hybrid of pet ownership and cockfighting.

“For the first time with this game, Pokémon are going to roam free in the real world,” Niantic CEO John Hanke said at a press conference.

Players will use their Android and iOS devices to see and interact with the titular pocket monsters in their neighborhoods, or wherever they travel. It’s a telling indicator of where the winds have been blowing in on-the-go gaming.

Also incoming: A wearable device called Pokémon Go Plus that vibrates to alert players of new things to do in the game. It looks like something only a superfan would buy, but it’s probably a hell of a lot cheaper than a smartwatch. Pricing details for the wearable device were not immediately available, and the game is said to be a free download — but left unsaid is how the company will monetize those downloads.

Here’s a 25 min video (from last year) about Niantic Labs, the studio that’s developing this game and also developed Ingress. This is worth the view for anyone interested in the future of gaming via wearables, and probably also in learning in the world via Augmented and Virtual Reality. The world is the game.
Niantic Labs' John Hanke - "Adventures on Foot" - D.I.C.E. 2014 Summit
John leads an innovative "startup" within Google called Niantic Labs. Niantic was founded by John as an independent group within Google to explore new kinds of mobile applications at the intersection of mobile, location, and social, with an eye toward an emerging class of wearable devices. The group has launched two very well received products to date - Field Trip, a guide to the hidden secrets and amazing places of the world, and Ingress, a mobile app that turns the entire world into an interactive, multiplayer game.

This is a MUST READ related article - but not about games - rather it is how we are embedded in a flow of information and data within which we create new data and data trails. This is the looming advent of new forms of behavioral data collection - a reminder that the traditional survey (as a primary instrument of social science data gathering) is dead. HR, personnel, medical research functions that aren’t preparing to implement and use these methods - are inviting a ‘Kodak Moment’ of disruption.
How new data-collection technology might change office culture
Employers experimenting with personal data collecting to boost performance
Imagine a tiny microphone embedded in the ID badge dangling from the lanyard around your neck.

The mic is gauging the tone of your voice and how frequently you are contributing in meetings. Hidden accelerometers measure your body language and track how often you push away from your desk.

At the end of each day, the badge will have collected roughly four gigabytes’ worth of data about your office behaviour.

Think this is far-fetched? Well, last winter employees at the consulting firm Deloitte in St. John's, Nfld., used these very badges, which are being touted as the next frontier in office innovation.

The Humanyze badges are just one of many data-driven tools that some advanced workplaces are testing in a bid to improve efficiency and communication. The tools range from complex email scanning programs to simple fitness trackers, such as Fitbits, that measure sleep patterns and movement.

Ben Waber, CEO of Humanyze, says he envisions the sensor-equipped badges will become ubiquitous. He and his colleagues developed the badges and analytical models while working on their PhDs at the Massachusetts Institute of Technology.

This is an interesting article that should make us consider not only the curricula of educational programs - but how we develop people as professionals.
The Correlation Between Arts and Crafts and a Nobel Prize
Is it good for engineers and scientists to have artistic pursuits? Surely they and the world would be better off if neuroscientists focused more on neuroscience, not hooking up an EEG to a music visualizer? Is there any point in a quantum physicist writing operas?

It turns out that even for individuals, the interaction between science and art is actually pretty complicated. It seems the avocational creativity and professional discoveries of scientists go hand in hand: the more accomplished a scientist is, the more likely they are to have an artistic hobby.

The average scientist is not statistically more likely than a member of the general public to have an artistic or crafty hobby. But members of the National Academy of Sciences and the Royal Society -- elite societies of scientists, membership in which is based on professional accomplishments and discoveries -- are 1.7 and 1.9 times more likely to have an artistic or crafty hobby than the average scientist is. And Nobel prize winning scientists are 2.85 times more likely than the average scientist to have an artistic or crafty hobby.

I personally don’t think this is the end of journalists - I think this is really the beginning of a whole new horizon for enabling journalists to engage in investigative and critical journalism. But the challenge isn’t only faced by journalists - others that will be challenged to ‘meta’ their game include scientists creating research reports and business analysts creating business reports - and many more.
End of the road for journalists? Robot reporter Dreamwriter from China’s Tencent churns out perfect 1,000-word news story - in 60 seconds
Chinese social and gaming giant Tencent published its first business report written by a robot this week, ramping up fears among local journalists that their days may be numbered.

The flawless 916-word article was released via the company’s portal, an instant messaging service that wields much sway in China, a country now in the throes of an automation revolution.

“The piece is very readable. I can’t even tell it wasn’t written by a person,” said Li Wei, a reporter based in the southern Chinese manufacturing boomtown of Shenzhen.

It was written in Chinese and completed in just one minute by Dreamwriter, a Tencent-designed robot journalist that apparently has few problems covering basic financial news.

In an interview with Wired magazine a few years ago, Narrative Science co-founder Kris Hammond predicted that "more than 90 per cent" of the news in the US would be written by computer programmes by 2027.

When quizzed on how long it would take for his robot to win a Pulitzer Prize, Hammond, at the time a professor of computer science and journalism at Northwestern University, unblinkingly answered: "Five years".
That was three years ago.

How far into the future is it until we have some sort of ubiquitous digital interface between our minds and the world?
How to Catch Brain Waves in a Net
A mesh of electrodes draped over the cortex could be the future of brain-machine interfaces
Last year, an epilepsy patient awaiting brain surgery at the renowned Johns Hopkins Hospital occupied her time with an unusual activity. While doctors and neuroscientists clustered around, she repeatedly reached toward a video screen, which showed a small orange ball on a table. As she extended her hand, a robotic arm across the room also reached forward and grasped the actual orange ball on the actual table. In terms of robotics, this was nothing fancy. What made the accomplishment remarkable was that the woman was controlling the mechanical limb with her brain waves.

The experiment in that Baltimore hospital room demonstrated a new approach in brain-machine interfaces (BMIs), which measure electrical activity from the brain and use the signal to control something. BMIs come in many shapes and sizes, but they all work fundamentally the same way: They detect the tiny voltage changes in the brain that occur when neurons fire to trigger a thought or an action, and they translate those signals into digital information that is conveyed to the machine.

To sense what’s going on in the brain, some systems use electrodes that are simply attached to the scalp to record the electroencephalographic signal. These EEG systems record from broad swaths of the brain, and the signal is hard to decipher. Other BMIs require surgically implanted electrodes that penetrate the cerebral cortex to capture the activity of individual neurons. These invasive systems provide much clearer signals, but they are obviously warranted only in extreme situations where doctors need precise information. The patient in the hospital room that day was demonstrating a third strategy that offers a compromise between those two methods. The gear in her head provided good signal quality at a lower risk by contacting—but not penetrating—the brain tissue.

The patient had a mesh of electrodes inserted beneath her skull and draped over the surface of her brain. These electrodes produced an electrocorticogram (ECoG), a record of her brain’s activity. The doctors hadn’t placed those electrodes over her cerebral cortex just to experiment with robotic arms and balls, of course. They were trying to address her recurrent epileptic seizures, which hadn’t been quelled by medication. Her physicians were preparing for a last-resort treatment: surgically removing the patch of brain tissue that was causing her seizures.
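The decoding loop common to all the BMIs described above (voltage fluctuations in, machine commands out) can be caricatured in a few lines. This is a deliberately crude sketch with a synthetic signal; the window size and power threshold are invented, and real decoders extract far richer spectral and spatial features with trained classifiers:

```python
import math

def window_power(samples):
    """Mean squared amplitude of one window of voltage samples."""
    return sum(s * s for s in samples) / len(samples)

def decode(stream, window=50, threshold=0.25):
    """Emit 1 ("move") when a window's mean power exceeds threshold, else 0."""
    commands = []
    for start in range(0, len(stream) - window + 1, window):
        commands.append(int(window_power(stream[start:start + window]) > threshold))
    return commands

# Synthetic recording: quiet baseline, then a burst of higher-amplitude activity
# (standing in for the motor-cortex activity that accompanies an intended reach).
quiet = [0.1 * math.sin(0.3 * t) for t in range(100)]
burst = [1.0 * math.sin(0.3 * t) for t in range(100)]
print(decode(quiet + burst))  # → [0, 0, 1, 1]
```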

Here’s another step forward in machine learning.
Unsupervised Computer Learns Spoken Language
System learns to distinguish words’ phonetic components, without human annotation of training data.
Every language has its own collection of phonemes, or the basic phonetic units from which spoken words are composed. Depending on how you count, English has somewhere between 35 and 45. Knowing a language’s phonemes can make it much easier for automated systems to learn to interpret speech.

In the 2015 volume of Transactions of the Association for Computational Linguistics, MIT researchers describe a new machine-learning system that, like several systems before it, can learn to distinguish spoken words. But unlike its predecessors, it can also learn to distinguish lower-level phonetic units, such as syllables and phonemes.
As such, it could aid in the development of speech-processing systems for languages that are not widely spoken and don’t have the benefit of decades of linguistic research on their phonetic systems. It could also help make speech-processing systems more portable, since information about lower-level phonetic units could help iron out distinctions between different speakers’ pronunciations.

Unlike the machine-learning systems that led to, say, the speech recognition algorithms on today’s smartphones, the MIT researchers’ system is unsupervised, which means it acts directly on raw speech files: It doesn’t depend on the laborious hand-annotation of its training data by human experts. So it could prove much easier to extend to new sets of training data and new languages.

Finally, the system could offer some insights into human speech acquisition. “When children learn a language, they don’t learn how to write first,” says Chia-ying Lee, who completed her PhD in computer science and engineering at MIT last year and is first author on the paper. “They just learn the language directly from speech. By looking at patterns, they can figure out the structures of language. That’s pretty much what our paper tries to do.”
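A minimal way to see the unsupervised idea at work: given unlabeled acoustic measurements, a clustering algorithm can discover category structure on its own. The sketch below runs 1-D k-means on synthetic formant-like values as a stand-in; the MIT system models raw speech with far more sophisticated machinery, so this only illustrates learning without annotation:

```python
import random

random.seed(0)
# Synthetic first-formant-like values (Hz) from three vowel-ish categories.
data = ([random.gauss(300, 20) for _ in range(50)] +
        [random.gauss(600, 20) for _ in range(50)] +
        [random.gauss(900, 20) for _ in range(50)])
random.shuffle(data)  # no labels, no ordering: the learner sees a raw stream

def kmeans_1d(xs, k, iters=20):
    """Plain k-means on scalars, with centers seeded across the data's range."""
    xs_sorted = sorted(xs)
    centers = [xs_sorted[i * (len(xs) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda i: abs(x - centers[i]))
            clusters[nearest].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

print(kmeans_1d(data, 3))  # recovers centers near 300, 600, 900 Hz
```

No human ever told the algorithm there were three categories at those frequencies; that structure is recovered from the data alone, which is the essence of the unsupervised setting.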

Understanding the trajectories of mind-computer interfaces transforms the Artificial Intelligence versus the Human dilemma into the one McLuhan noted - “technology is the most human thing about us”. This would be Augmented Intelligence instead. The issue is human enhancement - think about search and the Internet as our working memory. Or search, working memory, augmented learning and blistering analytic capabilities.
Neural network chess computer abandons brute force for selective, ‘human’ approach
A chess computer has taught itself the game and advanced to ‘international master’ level in only three days by adopting a more ‘human’ approach to the game. Matthew Lai, an MSc student at Imperial College London, devised a neural-network-based chess computer dubbed Giraffe [PDF] – the first of its kind to abandon the ‘brute force’ approach to competing with human opponents in favour of a branch-based approach whereby the AI stops to evaluate which of the move branches it has already calculated are most likely to lead to victory.

Most chess computers iterate through millions of moves in order to select their next position, and it was this traditional ‘depth-based’ approach that led to the first ground-breaking machine-over-human chess victory in 1997, when IBM’s Deep Blue beat reigning world champion Garry Kasparov.

Lai sought instead to create a more evolutionary end-to-end AI, building and improving on previous efforts which sought to leverage neural networks, but which paid performance penalties and faced logical issues about which of the potential millions of ‘move branches’ to explore efficiently.

‘With all our enhancements, Giraffe is able to play at the level of an FIDE International Master on a modern mainstream PC. While that is still a long way away from the top engines today that play at super-Grandmaster levels, it is able to defeat many lower-tier engines, most of which search an order of magnitude faster. One of the original goals of the project is to create a chess engine that is less reliant on brute force than its contemporaries, and that goal has certainly been achieved.’
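The selective idea behind Giraffe (evaluate positions, then expand only the most promising branches rather than everything to a fixed depth) can be sketched as a best-first search on a toy game tree. The tree, scores, and budget below are invented for illustration; in Giraffe the scores come from a trained neural network evaluating real chess positions:

```python
import heapq

# Hand-made toy tree: node -> (evaluation score, children).
# A stub score table stands in for both the game rules and the evaluator.
TREE = {
    "root": (0.0, ["a", "b", "c"]),
    "a": (0.9, ["a1", "a2"]),
    "b": (0.2, ["b1"]),
    "c": (0.6, ["c1", "c2"]),
    "a1": (0.8, []), "a2": (0.7, []),
    "b1": (0.1, []),
    "c1": (0.5, []), "c2": (0.4, []),
}

def selective_search(root, budget):
    """Best-first expansion: always grow the most promising frontier node,
    stopping after `budget` expansions instead of exhausting the tree."""
    visited = []
    frontier = [(-TREE[root][0], root)]  # max-heap via negated scores
    while frontier and len(visited) < budget:
        _, node = heapq.heappop(frontier)
        visited.append(node)
        for child in TREE[node][1]:
            heapq.heappush(frontier, (-TREE[child][0], child))
    return visited

print(selective_search("root", 4))  # → ['root', 'a', 'a1', 'a2']
```

Note that the weak branch "b" is never expanded at all: effort concentrates where the evaluator sees promise, which is the contrast with brute-force depth search.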

In the domain of AI here’s another ‘dot’ in the matrix of progress.
Honda cleared for autonomous vehicle testing in California
Honda has got the green light to trial its autonomous cars in sunny California, making it the tenth automaker to self-drive in the Golden State.

California is one of a handful of states, along with Michigan, Florida and Nevada, that have passed legislation enabling testing of self-driving cars on public roads.

If you haven’t sensed it by now, self-driving cars are the future of transportation.
Honda first revealed its technology regarding autonomous vehicles last summer when the company showcased a prototype Acura RLX powered with all the sensors and computing technology required for self-driving vehicles.

As of now, Honda Motors has activated its advanced driver-assistance systems in all its Honda and Acura models, and is fast advancing in self-driving car technology, to the delight of Honda loyalists.

Other auto giants that have been cleared to test their autonomous cars on Californian roads include Google Inc., Volkswagen AG, Daimler AG’s Mercedes-Benz, Tesla Motors Inc. and Nissan Motor Co.

This is a 58 min video - that is perhaps best summarized as “information visualization is not about nice pictures - it’s about insight”. Worth the view.
Ben Shneiderman Presents Knowledge Discovery
Professor Ben Shneiderman discusses Information Visualization for Knowledge Discovery during the College of Computing and Informatics' 2010-2011 Distinguished Lecture Series.

This is less weird than it sounds - it’s a very interesting example of how different scientific paradigms, with their associated metaphors, can help bring new forms of understanding and even new types of mathematical models to bear. What’s interesting is that often people don’t have a ‘viewpoint’ on an issue - until someone poses a question.
There’s a 60 min video as well. Rather than using traditional probability theory to understand human decision-making, this work explores quantum probability theory to examine how humans engage in decisions. Worth the view.
You're not irrational, you're just quantum probabilistic: Researchers explain human decision-making with physics theory
A new trend taking shape in psychological science not only uses quantum physics to explain humans' (sometimes) paradoxical thinking, but may also help researchers resolve certain contradictions among the results of previous psychological studies.
According to Zheng Joyce Wang and others who try to model our decision-making processes mathematically, the equations and axioms that most closely match human behavior may be ones that are rooted in quantum physics.

"We have accumulated so many paradoxical findings in the field of cognition, and especially in decision-making," said Wang, who is an associate professor of communication and director of the Communication and Psychophysiology Lab at The Ohio State University.

"Whenever something comes up that isn't consistent with classical theories, we often label it as 'irrational.' But from the perspective of quantum cognition, some findings aren't irrational anymore. They're consistent with quantum theory—and with how people really behave."

In two new review papers in academic journals, Wang and her colleagues spell out their new theoretical approach to psychology. One paper appears in Current Directions in Psychological Science, and the other in Trends in Cognitive Sciences.

Their work suggests that thinking in a quantum-like way—essentially not following a conventional approach based on classical probability theory—enables humans to make important decisions in the face of uncertainty, and lets us confront complex questions despite our limited mental resources.
The video is called
Joyce Wang - "An a priori and Parameter‐Free Quantum Model for Cognitive Measurement Order Effects"
Here’s a paraphrase of the order effect:
Attitudes are not entities simply retrieved from memory and recorded - instead, they are constructed as needed.
Thoughts constructed in answering a first question change the context used for evaluating subsequent questions - causing order effects.
Quantum theory provides an elegant way to formalize this intuition.
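To see why quantum theory formalizes order effects so naturally, here is a minimal sketch (my own illustration, not the model from Wang's paper): each survey question is modeled as a projection of a belief state, and because two projectors generally do not commute, the probability of answering "yes" to both questions depends on which is asked first. The specific vectors and angles below are arbitrary assumptions chosen for illustration.

```python
import numpy as np

def projector(v):
    """Projector onto the (normalized) vector v."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v)

psi = np.array([np.cos(0.3), np.sin(0.3)])     # initial belief state (unit vector)
P_a = projector(np.array([1.0, 0.0]))          # "yes" to question A
P_b = projector(np.array([np.cos(np.pi / 5), np.sin(np.pi / 5)]))  # "yes" to question B

# Probability of "yes to A, then yes to B" vs. the reverse order:
p_ab = np.linalg.norm(P_b @ P_a @ psi) ** 2    # ask A first
p_ba = np.linalg.norm(P_a @ P_b @ psi) ** 2    # ask B first

print(p_ab, p_ba)  # the two orders give different joint probabilities
```

In classical probability P(A and B) = P(B and A), so order effects look "irrational"; in the quantum formalism they fall out automatically whenever the two measurements don't commute.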

Here’s an interesting Forbes article about the rapid development of the blockchain and how the ‘big players’ are adopting it before they are disrupted. This is worth the view for anyone interested in the future of the world of finance and currency.
Bitcoin's Shared Ledger Technology: Money's New Operating System
...the three Millennials (ages 33, 34 and 29) and their company had begun selling software tools to help developers create Bitcoin apps. It was a seemingly inauspicious time to be staking your future on the digital currency. Within the previous year the FBI had shut down the biggest Bitcoin enterprise, underground drug bazaar Silk Road; the dominant Bitcoin exchange, Mt. Gox, had gone belly-up with more than $450 million missing; and Bitcoin, the currency, had started to plummet in value from a $1,240 speculative peak in December 2013 to its current $230.

Yet during the teleconference the jeans-clad entrepreneurs told the Nasdaq suits that they believed the technology underlying Bitcoin would bring about a once-in-a-lifetime seismic shift in the financial industry, shrinking its current profits and workforce but also creating many new markets and opportunities. Bitcoin isn’t merely the cryptocurrency that has caught the imagination of the antiestablishment and enriched a few lucky speculators. In fact, Bitcoin the currency is merely an app. The underpinnings–known as “the blockchain” or “distributed ledger” technology–are nothing less than a vastly faster, cheaper and more secure way to move money electronically. The blockchain is poised to become the dial tone for the 21st-century global economy.
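The core "chained ledger" idea is simple enough to sketch in a few lines. This is a toy illustration of the principle only, not Bitcoin's actual implementation (no mining, no peer network): each block records the hash of its predecessor, so tampering with any historical entry invalidates every block that follows it.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Add a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def chain_is_valid(chain):
    """Every block's stored prev_hash must match its predecessor's actual hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, ["alice pays bob 5"])
append_block(ledger, ["bob pays carol 2"])
print(chain_is_valid(ledger))                        # True
ledger[0]["transactions"] = ["alice pays bob 500"]   # tamper with history
print(chain_is_valid(ledger))                        # False
```

That tamper-evidence, replicated across many parties instead of held by one bank, is what makes the shared-ledger model attractive to the financial institutions discussed below.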
Gil Luria, a financial tech analyst at L.A.-based Wedbush Securities, estimates that a fifth of U.S. GDP–around $3.6 trillion–is generated by industries that will be disrupted, or at least made more efficient, by this new technology. But it is the financial services industry, and its hundreds of billions of profits, that faces the most immediate threat.

The Nasdaq execs had been studying Bitcoin for a year, and they knew all this. But picking the right partners was crucial. So they peppered the founders with questions about their vision and, more important, whether they could execute. The answer: The wild-haired Gundry and Smith had the product and software chops, while CEO Ludwin, with his neatly trimmed beard and tortoiseshell glasses, was a Harvard M.B.A. and former venture capitalist with a keen strategic view.

“People want to believe that there’s going to be this mythical coin that comes out of Silicon Valley that the world starts using and that all of Wall Street just falls into the ocean,” Ludwin muses. But that is simply not going to happen. Layers of infrastructure need to be built first, and consumers and regulators need to be persuaded to trust blockchain–two prerequisites that give financial heavyweights an enormous early edge. So the founders decided to bet the company’s future on working with the big boys.

Smart move. In May Nasdaq became the first established financial services company to announce a real-life test using Bitcoin technology. With the startup as its partner, Nasdaq plans to go live in November with blockchain trading of shares of pre-IPO private companies like Uber and Airbnb. But that’s only the beginning. Says Nasdaq Chief Executive Bob Greifeld, “[Blockchain] is the biggest opportunity set we can think of over the next decade or so.”

Nasdaq is hardly alone. Citigroup, Visa, Barclays, Bank of New York Mellon and UBS are among those moving to test blockchain technology. Meanwhile, the New York Stock Exchange, Goldman Sachs, USAA and BBVA have all invested in Bitcoin startups. Their dollars have helped make Bitcoin among the fastest-growing areas of startup investment, with $375.4 million committed in just the first half of 2015, compared with $339.4 million for all of 2014, CB Insights reports.

Funding their own disruption? Absolutely. But also brilliant. The big financial companies need to be involved early to identify where blockchain will eat their profits, says Eric Piscini, a principal in banking technology at Deloitte. And then, he adds, they must decide, “Should I cannibalize myself instead of waiting for someone else to cannibalize me?”

OK, the Internet of Things (IoT) has been a promise for a while, but this should stretch most people’s imagination to something beyond an Internet of appliances.
This sensor is the size of a speck of dust but its possibilities are huge
Pennies might cost more to make than they are actually worth, but a new super-small processor chip that costs just one cent might finally make the penny worth something again.

The nanoCloudProcessor, developed by Vancouver, British Columbia-based technology company Epic Semiconductors, is no bigger than a speck of dust—it's less than one square millimeter in size—and will cost just one penny. The external sensor will be well worth the price, as the peripheral chip can measure just about anything you throw at it.
The battery-free sensor can derive power from a variety of conductive surfaces, from polymers to rubber and even human skin. Wireless energy harvesting allows it to suck up power even when it’s 12 inches away from an energy source.

As it powers itself, the low-consumption chip will be able to sense human action with three-dimensional, multi-step gestures performed at up to two feet away.

The nanoCloudProcessor will also be fully connected, capable of communicating with other microchip-equipped devices like smartphones, smartwatches, computers, cars, home appliances, game consoles, and medical devices.

Potential applications for the nanoCloudProcessor are extensive and widely varied. Epic Semiconductors is positioning it as a do-all sensor for the Internet of Things. The company suggests it can do everything from check the freshness of food inside a refrigerator to sense a potential allergen that may be harmful to a consumer.

Here are some promising developments using open-source 3D printing for more affordable and customized prosthetics. There’s a 3 min video that is really a must view.
A Look At Open Bionics’ 3D-Printed Robotic Hands For Amputees
Open Bionics, a startup out of the U.K. that makes bionic hands, figured out how to dramatically lower the cost of prosthetics using a combination of open-sourced 3D printing software and robotic sensors.

Nearly 2 million people live with the loss of a limb in the U.S., according to Amputee Coalition. The most common amputation is partial hand or arm loss.

But the problem is even more prevalent worldwide and in developing countries that don’t often have access to expensive prosthetics. Hospital-grade myoelectric hands and limbs can cost up to $100,000 in some cases.

Open Bionics can produce its robotic hands in a matter of days and for a few thousand dollars. These hands are just as functional as more expensive prosthetics, but with a lighter, custom-fitted design so they are comfortable for the wearer, too.

Here’s something still in progress but with serious promise, combining 3D printing and solar energy - it’s a 3 min video.
Australia's Largest Solar Cells
Melbourne researchers have successfully created Australia's largest and most flexible solar cells, with the aid of a new printer located at CSIRO.

The breakthrough may have far reaching applications, including for public lighting, outdoor signage and mobile power generation.

The scientists are part of a collaboration between research and industry called the Victorian Organic Solar Cell Consortium (VICOSC).

This is cool - I hope more are built.
The Largest Air Purifier Ever Built Sucks Up Smog And Turns It Into Gem Stones
What’s 23 feet tall, eats smog, and makes jewelry for fun?
In Rotterdam this week, the designer Daan Roosegaarde is showing off the result of three years of research and development: The largest air purifier ever built. It’s a tower that scrubs the pollution from more than 30,000 cubic meters of air per hour—and then condenses those fine particles of smog into tiny “gemstones” that can be embedded in rings, cufflinks, and more.

Each stone is roughly equivalent to cleaning 1,000 cubic meters of air—so you’re literally wearing the pollution that once hung in the air around Roosegaarde’s so-called Smog Free Tower. In the designer’s words, buying a ring means “you donate a thousand cubic meters of clean air to the city where the Smog Free Tower is.”
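A quick back-of-the-envelope check on those figures (my own arithmetic, assuming continuous operation and that every cubic meter of cleaned air contributes to a stone):

```python
# Figures quoted in the article:
air_per_hour_m3 = 30_000   # air the tower scrubs per hour
air_per_stone_m3 = 1_000   # clean-air equivalent of one "gemstone"

stones_per_hour = air_per_hour_m3 / air_per_stone_m3
print(stones_per_hour)  # 30.0 -- roughly thirty stones' worth of smog per hour
```

So one tower running around the clock could, in principle, supply the raw material for a few hundred rings a day.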

The project has been in the offing for a long time. We wrote about the idea more than two years ago when the Dutch designer first publicly announced the project, which was originally planned for Beijing after the city’s mayor endorsed the idea. Roosegaarde and his team have spent the past few years developing the first prototype in Rotterdam, where it was unveiled this month. “It’s really weird that we accept [pollution] as something normal, and take it for granted,” Roosegaarde explains.
