This is how you change MIT. Change the world, MIT will catch up to it.
Neil Gershenfeld - Digital Reality
Say “leadership” and you invoke the image of an individual—at the limit, the great white knight riding in on a great white horse to save us all (even if headed into a black hole). Everyone else is a follower. Even if the intention of the leadership is to empower other people, its effect can be to disempower them. Do we really want a world of followers?
Think of the organizations you most admire. I’ll bet that front and center is a powerful sense of community. To use a phrase that I cannot repeat too often, effective organizations are communities of human beings, not collections of human resources.
How can you recognize communityship? That’s easy. You have found it when you walk into an organization and are struck by the energy in the place, the personal commitment of the people and their collective engagement in what they are doing. These people don’t have to be formally empowered because they are naturally engaged. The organization respects them so they respect it. They don’t live in mortal fear of being fired en masse because some “leader” hasn’t made his or her numbers. Imagine an economy made up of such organizations.
Sure we need leadership, especially to establish communityship in a new organization and to help sustain it in an established organization. What we don’t need is this obsession with leadership—of the individual singled out from the rest, as if he or she is the be-all and end-all of the organization. So here’s to less leadership, or perhaps better put, just enough leadership, embedded in communityship.
There is a famous line in a Molière play spoken by a character who discovers that he has been speaking prose all his life. Well, it’s time for us to discover that the best of our organizations have been living communityship all their lives.
Henry Mintzberg - Enough Leadership. Time for Communityship.
“I would (and do) seriously argue that especially in the face of complexity you absolutely have to have an open and collaborative development process, exactly because it’s the only thing that scales. However, it’s not enough to be open and collaborative – it needs to be distributed as well. And by ‘distributed’, I mean the massive parallel kind where everybody can replicate the whole thing.”
Linus Torvalds, the originator of Linux, quoted in
http://www.optimumonline.ca/article.phtml?id=484&page=4
...Larry Page. From birth in 1973, Larry Page was incubated in the growth light of Moore’s Law. His father and mother were computer scientists. He grew up on Michigan college campuses, never far from a computer center. He took for granted the dizzying gains in computation that would come. So he did not think twice about proposing schemes that exploited the effects of Moore’s Law, especially the big idea he had as a Stanford graduate student of dramatically improving search by taking advantage of the links of the World Wide Web.
Page has outlined what might be known as his own variation on Moore:
Huge acceleration in computer power and memory
+ rapid drop in cost of same
= no excuse for not pursuing wildly ambitious goals
Companies that develop products for the world in its present state are doomed to failure… Successful products are created to take advantage of the tools and infrastructure of the future.
How Moore’s Law Made Google Possible
Here is a great must-view 22-minute TED Talk for anyone interested in how a company can thrive for decades while being different.
Ricardo Semler: Radical wisdom for a company, a school, a life
What if your job didn’t control your life? Brazilian CEO Ricardo Semler practices a radical form of corporate democracy, rethinking everything from board meetings to how workers report their vacation days (they don’t have to). It’s a vision that rewards the wisdom of workers, promotes work-life balance — and leads to some deep insight on what work, and life, is really all about. Bonus question: What if schools were like this too?
Two decades after transforming a struggling equipment supplier into a radically democratic and resilient (and successful) company, Ricardo Semler wants organizations to become wise.
This next article is longish - but worth the read - for anyone interested in an attempt to manage a new organization based on Holacracy. Medium is a new blogging and publishing platform being developed by the founders of Twitter.
How Medium Is Building a New Kind of Company with No Managers
Medium adopted the Holacracy model about a year ago. Stirman calls it “hands down, by far the best way I know or have ever seen to structure and run a company.” He’s especially drawn to the strategy’s crystal-clear minimalism and logic. “It’s basically an operating system for your organization, so the engineer in me loves it. In fact the Holacracy organization just released 4.0 of its constitution, so our company is upgrading — just like you would update to a new iOS.”
Here are some of the key tenets that Medium embraces:
- No people managers. Maximum autonomy.
- Organic expansion. When a job gets too big, hire another person.
- Tension resolution. Identify issues people are facing, write them down, and resolve them systematically.
- Make everything explicit — from vacation policies to decision makers in each area.
- Distribute decision-making power and discourage consensus seeking.
- Eliminate all the extraneous factors that worry people so they can focus on work.
“The structure is totally built around the work the company needs to achieve its purpose,” Stirman explains. “We don’t have a hierarchy of people, we have a hierarchy of circles.” For example, the Reading and Discovery (or RAD) circle, containing roles dedicated to the site’s reading experience, is nested inside the Product Development circle, as is the Creation and Feedback circle, which is all about the content creation process. In this instance, the Product Development circle can review the results coming out of the nested circles to steer the product in a particular direction. Every member of a circle has a purpose that connects to the broader circle’s purpose, which connects to the company’s purpose, so everyone is always pulling toward the same promised land.
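To make the “hierarchy of circles” idea concrete, here is a minimal sketch, in Python, of how nested circles, roles, and purposes might be modeled. The class names and example purposes are hypothetical illustrations of the structure described above, not Holacracy’s actual constitution or tooling.

```python
# A minimal sketch of a "hierarchy of circles": each circle has a purpose,
# a set of roles, and nested sub-circles. The names and purposes below are
# hypothetical illustrations, not Holacracy's actual data model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Role:
    name: str
    purpose: str

@dataclass
class Circle:
    name: str
    purpose: str
    roles: List[Role] = field(default_factory=list)
    sub_circles: List["Circle"] = field(default_factory=list)

    def describe(self, indent: int = 0) -> None:
        """Print the circle tree, showing how each purpose nests inside a broader one."""
        pad = "  " * indent
        print(f"{pad}{self.name}: {self.purpose}")
        for role in self.roles:
            print(f"{pad}  [role] {role.name}: {role.purpose}")
        for sub in self.sub_circles:
            sub.describe(indent + 1)

# Mirroring the example above: RAD and Creation & Feedback nest inside Product Development.
company = Circle("Medium", "company purpose", sub_circles=[
    Circle("Product Development", "build the product", sub_circles=[
        Circle("Reading and Discovery (RAD)", "the site's reading experience",
               roles=[Role("Reading UX", "make reading delightful")]),
        Circle("Creation and Feedback", "the content creation process"),
    ]),
])
company.describe()
```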
Here’s an old post from last year - but well worth the read for anyone interested in the means to establish better foresight.
The Inevitability of Predicting the Future
For years, Ramakrishnan, a professor at Virginia Tech, and his team have been sifting through tweets, blog posts, and news articles about Latin America, keeping a close eye on events in ten countries, including Venezuela. These past couple of months have been no different. But Ramakrishnan and his colleagues haven’t been bent over newspapers or straining their eyes scanning streams of tweets. Rather, they were monitoring the dashboard of EMBERS, their computer program that draws on tweets, news articles, and more to predict the future.
Lopez’s #LaSalida tweet was probably among those which EMBERS analyzed, and the meaning of its uncoded message was almost certainly clear to the sophisticated system. But by that point, EMBERS had already suggested to its operators that Venezuela was ripe for civil unrest. It had also done the same for Brazil many months earlier, accurately predicting the June 2013 demonstrations against rising transit fares.
EMBERS is the result of years’ worth of work by Ramakrishnan and his team, which includes computer scientists, statisticians, political scientists, social scientists, and an epidemiologist. It is the winning entrant in the Open Source Indicators program at the Intelligence Advanced Research Projects Activity, a part of the Office of the Director of National Intelligence. IARPA, according to its website, “invests in high-risk, high-payoff research programs that have the potential to provide the United States with an overwhelming intelligence advantage over future adversaries.” The ability to accurately forecast civil unrest, epidemics, and elections around the world could do exactly that.
To create EMBERS, dozens of scientists across a handful of disciplines developed algorithms to scrutinize Twitter’s firehose of information, unravel various dialects of Spanish, Portuguese, and French, tally reservation cancellations on OpenTable, and count cars in satellite images of hospital parking lots. None of the data sets they use are classified, though some of them cost money to access. The team has spent two years fine-tuning the algorithms and checking their forecasts against reports assembled by a third party.
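As a rough illustration of how open-source signals like these might be fused into a single warning score, here is a toy sketch. The signal names, weights, and threshold are assumptions for illustration only; the article does not describe EMBERS’ actual models, which are far more sophisticated.

```python
# Toy sketch: fuse several open-source signals into a single "unrest warning" score.
# The signal names, weights, and alert threshold are hypothetical.

SIGNAL_WEIGHTS = {
    "protest_keyword_tweets": 0.4,    # volume of protest-related tweets
    "news_mentions": 0.3,             # relevant news articles per day
    "restaurant_cancellations": 0.2,  # e.g. OpenTable cancellation spikes
    "hospital_parking_occupancy": 0.1,
}

def warning_score(signals: dict) -> float:
    """Combine normalized signals (each scaled to 0..1) into a weighted score."""
    return sum(SIGNAL_WEIGHTS[name] * value
               for name, value in signals.items()
               if name in SIGNAL_WEIGHTS)

observed = {
    "protest_keyword_tweets": 0.9,
    "news_mentions": 0.7,
    "restaurant_cancellations": 0.5,
    "hospital_parking_occupancy": 0.2,
}

score = warning_score(observed)
print(f"warning score: {score:.2f}")
if score > 0.6:  # hypothetical alert threshold
    print("Flag the country for analyst review: possible civil unrest.")
```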
Here’s an interesting article on the future of computational paradigms. Well worth the read.
Moore’s Law Is About to Get Weird
Never mind tablet computers. Wait till you see bubbles and slime mold.
In the nearly 70 years since the first modern digital computer was built, the above specs have become all but synonymous with computing. But they need not be. A computer is defined not by a particular set of hardware, but by being able to take information as input; to change, or “process,” the information in some controllable way; and to deliver new information as output. This information and the hardware that processes it can take an almost endless variety of physical forms. Over nearly two centuries, scientists and engineers have experimented with designs that use mechanical gears, chemical reactions, fluid flows, light, DNA, living cells, and synthetic cells.
Such now-unconventional means of computation collectively form the intuitively named realm of, well, unconventional computing. One expert has defined it as the study of “things which are already well forgotten or not discovered yet.” It is thus a field both anachronistic and ahead of its time.
But given the astounding success of conventional computing, which is now supported by a massive manufacturing industry, why study unconventional computing techniques at all? The answer, researchers say, is that one or more of these techniques could become conventional, in the not-so-distant future. Moore’s Law, which states that the number of transistors that can be squeezed onto a semiconductor chip of a given size doubles roughly every two years, has held true since the mid 1960s, but past progress is no guarantee of future success: Further attempts at miniaturization will soon run into the hard barrier of quantum physics, as transistors get so small they can no longer be made out of conventional materials. At that point, which could be no more than a decade away, new ideas will be needed.
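The doubling arithmetic is easy to check: if transistor counts double every two years, a chip made t years later should hold roughly 2^(t/2) times as many transistors. A quick sketch, using Intel’s 1971-era 4004 (about 2,300 transistors) as the starting point:

```python
# Moore's Law as stated above: transistor counts double roughly every two years.
def projected_transistors(start_count: int, start_year: int, year: int,
                          doubling_period_years: float = 2.0) -> float:
    """Return the projected transistor count after repeated doubling."""
    doublings = (year - start_year) / doubling_period_years
    return start_count * (2 ** doublings)

# Intel's 4004 (1971) had about 2,300 transistors; project forward to 2015.
print(f"{projected_transistors(2300, 1971, 2015):,.0f}")  # on the order of billions
```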
So which unconventional technique will run our computers, phones, cars, and washing machines in the future? Here are a few possibilities.
Here is an important piece about the future of the Internet and the power of metaphors to shape how we reason about things. If we are not careful about how a metaphor ‘primes’ the structure of our reasoning, we can be constrained into making decisions shaped by the makers of meme-metaphors.
The Future of the Internet Might Hinge on This Bet
Some big non-profit foundations are joining forces to address some scary aspects of the Net. They just might make a difference.
Humans are driven by metaphors. We can’t help it. “Internet access is like electricity,” we say, and that leads to a host of other mental images: standard plugs for a wealth of devices, warm light against a dark frozen landscape, the burdens of life made more bearable. The warring metaphor now is “the Internet is the new TV,” thoroughly managed, channelized, bent on entertainment, ad-driven, interactive only when it suits someone’s business plan. Both of these metaphors are limited and not quite right. That’s the way metaphors work.
But we are in fact ants on a wrinkle of history, trundling across a vast transforming landscape that we can’t see on our own.
We’re in an important moment at the beginning of the beginning of the Internet. The opportunities presented by a phase change in informational capacity — what’s possible with fiber optic lines — and inclusive, ubiquitous connectivity are dazzling: genuine presence in other peoples’ lives, an additional layer to existence that augments and deepens human capacity for compassion and connection, a rich addition to sight. Think avatars, holodecks, and interactivity across every surface, and then multiply.
We’re not only going to need better metaphors, but we’re also going to require some serious work on making real life live up to them. “Electricity” doesn’t capture interactivity or interoperability in anything other than the crudest form. Nor does it give us a way of talking about crucial differences in capacity across socio-economic lines or the two-way mirror of data use.
This is a brilliant short article about markets and the wisdom of the crowd - a parable that everyone would benefit from reading.
The parable of the ox
In 1906, the great statistician Francis Galton observed a competition to guess the weight of an ox at a country fair. Eight hundred people entered. Galton, being the kind of man he was, ran statistical tests on the numbers. He discovered that the average guess (1,197lb) was extremely close to the actual weight (1,198lb) of the ox. This story was told by James Surowiecki, in his entertaining book The Wisdom of Crowds.
Not many people know the events that followed. A few years later, the scales seemed to become less and less reliable. Repairs were expensive; but the fair organiser had a brilliant idea. Since attendees were so good at guessing the weight of an ox, it was unnecessary to repair the scales. The organiser would simply ask everyone to guess the weight, and take the average of their estimates.
A new problem emerged, however. Once weight-guessing competitions became the rage, some participants tried to cheat. They even sought privileged information from the farmer who had bred the ox. It was feared that if some people had an edge, others would be reluctant to enter the weight-guessing competition. With only a few entrants, you could not rely on the wisdom of the crowd. The process of weight discovery would be damaged….
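Galton’s result is easy to reproduce in simulation, under the assumption that guesses are independent and scattered evenly around the true weight: averaging many of them lands near the truth, while averaging only a few does not, which is exactly the parable’s worry about a thinning, compromised crowd. A toy sketch with illustrative numbers:

```python
# Toy reproduction of Galton's ox experiment: many independent, noisy guesses
# average out to something close to the true weight. All numbers are illustrative,
# and the model assumes guesses are unbiased around the true value.
import random

TRUE_WEIGHT = 1198   # pounds, the figure quoted in the parable
NUM_GUESSERS = 800

random.seed(0)
guesses = [random.gauss(TRUE_WEIGHT, 100) for _ in range(NUM_GUESSERS)]
crowd_estimate = sum(guesses) / len(guesses)
print(f"crowd average: {crowd_estimate:.0f} lb (true weight: {TRUE_WEIGHT} lb)")

# With only a handful of entrants, the estimate gets much noisier,
# which is the parable's point about a shrinking crowd.
few = guesses[:5]
print(f"five-guess average: {sum(few) / len(few):.0f} lb")
```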
Anyone wanting to hear the same parable as audio can listen to this BBC podcast.
I remember reading Neil Gershenfeld’s book “When Things Start to Think” sometime in 2001. It’s when I first heard of ‘smart money’ (think now of Bitcoin). This article is a MUST READ for anyone interested in understanding the future of fabrication in the digital environment - or in future education - or in the future of the digital environment. He’s an amazing thinker and here’s his latest conversation with Edge.Org - the video is 53 min.
Digital Reality
A Conversation with Neil Gershenfeld, a physicist and the director of MIT's Center for Bits and Atoms.
What interests me is how bits and atoms relate—the boundary between digital and physical. Scientifically, it's the most exciting thing I know. It has all sorts of implications that are widely covered almost exactly backwards. Playing it out, what I thought was hard technically is proving to be pretty easy. What I didn't think was hard was the implications for the world, so a bigger piece of what I do now is that. Let's start with digital.
Digital is everywhere; digital is everything. There's a lot of hubbub about what's the next MIT, what's the next Silicon Valley, and those were all the last war. Technology is leading to very different answers. To explain that, let's go back to the science underneath it and then look at what it leads to.
Next year we're starting a new class with George Church that we've called "How to Grow Almost Anything", which is using fab labs to make bio labs and then teach biotech in it. What we're doing is we're making a new global kind of university.
This is a MUST READ article - it provides important insight into why we must think carefully about the regulation we’ll have to develop alongside the Internet of Things.
Before we give doors and toasters sentience, we should decide what we're comfortable with first
It's becoming more and more common for everyday appliances to have features we don't expect, and the implications for privacy and freedom can be surprisingly profound. We should be sure we know what we're buying into.
What is the Internet of Things, then? Put most simply, think of it like this: in the 1990s we saw the internet appear on personal computers, and then in the 2000s we saw it move onto smartphones. The Internet of Things is the next step, as web connectivity comes to a huge range of everyday objects - and ourselves - in ways that we might recognise most clearly from science fiction. Fast internet connections mean that every slab of silicon has access to a remote cloud computer cluster that can give it functions far superior to any unconnected brick of equivalent size - that's how Siri works on iPhones. As marvellous as that smartphone may look in the hand, it's not powerful enough to process real-time voice recognition software, but Apple's huge data centres are available just a few milliseconds away across the Atlantic to do the job just as seamlessly.
Well, as author and digital rights activist Cory Doctorow explained to me when I interviewed him last year, imagine an Arab Spring-type situation in a country with very cold winters, universal Nest thermostat adoption and a dictator with no qualms about mass surveillance of web and mobile data communications. On the first day of a mass uprising the security services can stick fake mobile signal towers up around the public square of the capital and hoover up the unique identification addresses from the smartphones of every single protester there. (These towers exist, even here.) That night, as the temperature drops to its bitter coldest, every single protester finds their heating system remotely disabled. Hypothermia takes care of the dictator's problem.
This is not science fiction. It's entirely possible with existing technology, and only made unrealistic because that technology hasn't reached universal rates of adoption. This is the upcoming Internet of Things, if we're not careful.
Here’s a wonderful application of the digital environment that enables a social network for learning languages. Short article.
A Social Network That Will Help You Learn a Language
Bernhard Niesner, from Austria, and Adrian Hilti, from Switzerland, met back in 2007 when both were pursuing their MBA in Madrid. Together, they wrote the final project for their Master’s – a project that would become the biggest social network in the world to learn languages, with over 50 million registered users coming from over 200 countries.
Niesner’s obsession was to simplify the way people learn a new language, and Busuu came as a result of that. The choice of name probably reflects his knowledge of marketing techniques and the basic principle of setting yourself apart from your closest competitors. The name comes from an African language originating in Cameroon, reportedly spoken by only eight people and therefore very close to extinction.
Busuu is actually very similar to Facebook in terms of the way it works. Users sign up, send friendship invitations, and create groups to exchange text corrections, translations or simply to exchange some thoughts, as well as to practice with native speakers of a specific language. It should come as no surprise, of course, that Busuu is one of the languages users can learn for free on the social network, along with 12 others: English, German, Spanish, French, Italian, Portuguese, Russian, Mandarin, Japanese, Arabic, Polish, and Turkish. The social network has over 40,000 unique users every single day, which is something of a guarantee of activity and feedback for content submitted by the users. According to Niesner, learning online “has many advantages, like being available 24/7 and being accessible regardless of the user’s location, whether they are at home or commuting in a bus or a train.”
Here’s the link to Busuu: https://www.busuu.com/enc/
Here’s something quite scary - a hint at a plausible arms race and invisible code that will be used for good and ill, for gain and rent-seeking. Definitely worth the read and thinking about.
A Crypto Trick That Makes Software Nearly Impossible to Reverse-Engineer
Software reverse engineering, the art of pulling programs apart to figure out how they work, is what makes it possible for sophisticated hackers to scour code for exploitable bugs. It’s also what allows those same hackers’ dangerous malware to be deconstructed and neutered. Now a new encryption trick could make both those tasks much, much harder.
At the SyScan conference next month in Singapore, security researcher Jacob Torrey plans to present a new scheme he calls Hardened Anti-Reverse Engineering System, or HARES. Torrey’s method encrypts software code such that it’s only decrypted by the computer’s processor at the last possible moment before the code is executed. This prevents reverse engineering tools from reading the decrypted code as it’s being run. The result is tough-to-crack protection from any hacker who would pirate the software, suss out security flaws that could compromise users, and even in some cases understand its basic functions.
“This makes an application completely opaque,” says Torrey, who works as a researcher for the New York State-based security firm Assured Information Security. “It protects software algorithms from reverse engineering, and it prevents software from being mined for vulnerabilities that can be turned into exploits.”
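The general idea of keeping code encrypted until the instant it runs can be illustrated with a deliberately simplified toy, shown below. This is an analogy only: it uses Python source and a throwaway XOR key, whereas HARES operates on machine instructions with processor-level support, and nothing here should be read as how HARES actually works.

```python
# Toy analogy only: keep a routine's code encrypted at rest and decrypt it just
# before execution, discarding the plaintext afterwards. Real schemes like HARES
# work at the processor/instruction level, and a plain XOR "cipher" like this
# offers no real protection; this just illustrates the decrypt-at-the-last-moment idea.

KEY = 0x5A  # hypothetical single-byte key

def xor_bytes(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

# The "protected" routine, kept only in encrypted form.
plaintext_source = "def secret_check(x):\n    return x * 31 % 97 == 5\n"
encrypted_blob = xor_bytes(plaintext_source.encode(), KEY)
del plaintext_source  # only the encrypted blob stays around

def run_protected(blob: bytes, arg: int) -> bool:
    """Decrypt at the last moment, execute, and drop the plaintext immediately."""
    namespace: dict = {}
    exec(xor_bytes(blob, KEY).decode(), namespace)  # decrypt, then run
    return namespace["secret_check"](arg)

print(run_protected(encrypted_blob, 12))
```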
Speaking of the new literacies of the digital environment - here’s learning two at once.
Doctor Who game helps kids to learn to code
Follow the hunt for the Orb of Fates and learn the basics of coding.
The BBC has revealed its fiendish side—tricking kids into learning how to code with a free Doctor Who game. The Doctor and the Dalek is available now on Android, iOS, and Amazon app stores, and combines a platforming adventure with an introduction to Boolean logic-based programming.
Peter Capaldi, the current incarnation of The Doctor, lends his voice to the game, which is written by series writer Phil Ford. Over three worlds and 12 levels, players will follow the hunt for the Orb of Fates, an ancient device from the Time War that unlocks the Starbane, a weapon that can wipe out whole solar systems—and is being fought over by the Daleks and Cybermen. The Doctor serves as mentor rather than main character, helping control a reprogrammed Dalek in search of the artefact.
The game was commissioned by BBC Learning, and ties into the government's new IT curriculum focusing on modern computer skills.
Here is an interesting, longish graphic article (i.e., a comic-book style article).
Supernormal Stimuli: This is Your Brain on Porn, Junk Food, and the Internet
Given the rapid pace of technology, one has to wonder whether or not our brains (and bodies) have been able to keep up with all the new “stimulation” that is available.
Some research suggests that a few of the things we enjoy today would be classified as supernormal stimuli, a term evolutionary biologists use to describe any stimulus that elicits a response stronger than the stimulus for which it evolved, even if it is artificial—in other words, are sources of “super” stimulation like junk food and porn more likely to hook us into bad habits?
It is certainly a very muddy topic, but it’s a question that I believe deserves investigating.
After all, we’ve become increasingly surrounded by stimulation that wasn’t available even a few years ago, so are my mind and body really ready for Flavor Blasted Goldfish™ and never ending social media updates?
Before we get into the research, let’s summarize the concept a bit more clearly: what exactly is a supernormal stimulus?
The brilliant comic below will explain the basics, and will take you less than 2 minutes to read.
Speaking of supernormal stimulus - here’s a very interesting new research center that may be able to take us in the opposite direction. This may be part of the future of elite military training.
Flow Dojo: World's First Research and Training Center for Flow States
Overview of the world's first dedicated Flow research and training centre, the Flow Dojo.
Equal parts Cirque du Soleil and X-Games with a Quantified Self overlay to track all the data, the Flow Dojo lets us train our brains (and bodies) to find our minds.
Combining advanced kinetic equipment with bio and neurofeedback, this training centre will revolutionize how we can harness peak performance. It will be open for leadership teams and training, elite athletes, and anyone dedicated to taking their game to the next level.
And if that’s not enough here’s more on the same trajectory - even if the current results are less than expected.
More intelligence genes found that will lead to effective embryo selection
Scientists looking for the genes underlying intelligence are in for a slog. One of the largest, most rigorous genetic studies of human cognition has turned up inconclusive findings, and experts concede that they will probably need to scour the genomes of more than 1 million people to confidently identify even a small genetic influence on intelligence and other behavioural traits. The results were published in the journal Nature.
In a 2013 study comparing the genomes of more than 126,000 people, the group identified three gene variants associated with how many years of schooling a person had gone through or whether they had attended university. But the effect of these variants was small — each variant correlated with roughly one additional month of schooling in people who had it compared with people who did not.
In another study of 106,000 people, researchers picked out 69 gene variants most strongly linked to education level. To establish a more direct link with IQ, they cross-checked this list with genetic variants in a second sample of 24,000 people who also had taken tests of cognitive ability. Three gene variants were found to be associated with both educational attainment and higher IQ scores.
The three variants the researchers identified were each responsible for an average of 0.3 points on an IQ test. (About two-thirds of the population score between 85 and 115.) That means that a person with two copies of each variant would score 1.8 points higher on an intelligence test than a person with none of them.
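The arithmetic behind that 1.8-point figure is simply the per-copy effect summed across variants and copies:

```python
# Checking the arithmetic in the paragraph above: three variants, each worth
# about 0.3 IQ points per copy, carried in two copies each.
per_copy_effect = 0.3   # IQ points per copy of a variant
num_variants = 3
copies_per_variant = 2

total_effect = per_copy_effect * num_variants * copies_per_variant
print(f"{total_effect:.1f} IQ points")  # 1.8
```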
To put those figures in perspective, those variants have about one-twentieth the influence on intelligence as do gene variants linked to other complex traits such as height, says Daniel Benjamin, a social scientist at Cornell University in Ithaca, New York, who co-led the study.
More progress on 3D printing and robots - this time it’s using a ‘community’ of small robots to print larger structures - very interesting. The graphics are very clear and there’s a 7-minute video.
Minibuilders - Small robots printing big structures
Large-scale 3D printing isn't a new technology, but past experiments have required massively intensive infrastructure. Minibuilders was conceptualized as a community of three modestly sized robots, each tasked with a very specific job, that together aggregate into a large-scale operation.
Each robot completes its programmed job in sequence to fully construct an automated, inhabitable structure. Ultimately Minibuilders shows the capacity for robotics to have a significant impact on the architecture discipline and industry.
Here is a new 3-minute video from Boston Dynamics - which Google bought recently… we have to wonder what the state of the art will be in another decade.
Introducing Spot
Spot is a four-legged robot designed for indoor and outdoor operation. It is electrically powered and hydraulically actuated. Spot has a sensor head that helps it navigate and negotiate rough terrain. Spot weighs about 160 lbs.
Now, Spot may not seem very advanced, but here’s another, more extensive article on the looming arrival of robots everywhere.
Robots are leaving the factory floor and heading for your desk – and your job
Will robots cause unemployment or create new types of jobs and increased leisure time for humans? Expert opinion is divided…
It could be said that the job of bridge toll collector was invented in San Francisco. In 1968, the Golden Gate Bridge became the world’s first major bridge to start employing people to take tolls.
But in 2013 the bridge where it all began went electronic. Of its small band of collectors, 17 people were redeployed or retired and nine found themselves out of work. It was the software that did it – a clear-cut case of what economists call technological unemployment. Licence-plate recognition technology took over. Automating jobs like that might not seem like a big deal. It is easy to see how it might happen, just as buying train tickets at machines or booking movie tickets online reduces the need for people.
But technology can now do many more things that used to be unique to people. Rethink Robotics’ Baxter, a dexterous factory robot that can be programmed by grabbing its arms and guiding it through the motions, sells for a mere $25,000 (equivalent to about $4 an hour over a lifetime of work, according to a Stanford University study). IPsoft’s Amelia, a virtual service desk employee, is being trialled by oil industry companies, such as Shell and Baker Hughes, to help with employee training and inquiries. Meanwhile, doctors are piloting the use of Watson, IBM’s supercomputer, to assist in diagnosing patients and suggesting treatments. Law firms are using software such as that developed by Blackstone Discovery to automate legal discovery, the process of gathering evidence for a lawsuit, previously an important task of paralegals. Rio Tinto’s “mine of the future” in Western Australia has 53 autonomous trucks moving ore and big visions for expansion. Even the taxi-sharing company Uber is in on the act – it has just announced it will open a robotics research facility to work on building self-driving cars.
Speaking of digital reality - here’s a product that anticipates a fast growing market - not too far off. Some of the demo is pretty lame - but it serves to show current commercial aims.
ACCOMPANY - Integrated robot technologies for supporting elderly people in their homes
Robot companions integrated with intelligent environments can facilitate independent living of the elderly at home. Using the robotic home assistant Care-O-bot 3, the ACCOMPANY project develops new solutions for providing services in a motivating and socially acceptable manner.
This video demonstrates some use cases of the assistive technologies developed in the ACCOMPANY project within a realistic home care setting. The resident, Mrs. Taylor, is living alone with the assistance of a robot in a smart environment. The video showcases how Mrs. Taylor utilizes her companion robot for entertainment, cognitive and social assistance as well as a physical support for retrieving or transporting items. A strong focus has been put on having the robot interact in an empathic and socially acceptable way whilst facilitating re-ablement by encouraging Mrs. Taylor to stay active. Thus the robot provides motivational support rather than acting as a mere servant.
Here is a 3min video about another hospital robot - this robot movement is accelerating.
Aethon 'Tug' Hospital Robot | CNBC
Say hello to Tug, an automated delivery robot created by Aethon, that transfers medical supplies and patients' meals around hospitals.
Speaking of automation, here’s something that may shock consumers of mainstream news.
The Associated Press wrote 10X more articles using robots
If you thought that journalist was not on the list of professions replaceable by robots, think again. The Associated Press, America’s oldest 24-hour news agency, produced roughly 3,000 articles on company earnings last quarter, 10 times more than it used to, by using automated technology.
According to The Verge, AP has been able to do it by partnering with Automated Insights, a company that specializes in “robot journalism.” Automated Insights uses artificial intelligence and Big Data analysis to automatically generate data-heavy articles, such as earnings reports.
Initially there was some human editing involved, but now most of the articles are fully automated — with far fewer errors than human reporters and editors. In theory, it could crank out 2,000 articles per second.
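To give a flavor of how data-heavy stories can be generated from structured inputs, here is a toy template sketch. The field names and wording are hypothetical and are not Automated Insights’ actual technology.

```python
# Toy sketch of template-driven earnings coverage: turn a structured earnings
# record into a short article. The fields and phrasing here are hypothetical.

def earnings_story(company: str, quarter: str, eps: float,
                   eps_estimate: float, revenue_millions: float) -> str:
    beat_or_miss = "beat" if eps >= eps_estimate else "missed"
    return (
        f"{company} reported {quarter} earnings of ${eps:.2f} per share, "
        f"which {beat_or_miss} analyst estimates of ${eps_estimate:.2f}. "
        f"Revenue came in at ${revenue_millions:,.0f} million."
    )

# One record in, one publishable paragraph out; repeat this over thousands of
# filings and you get the kind of volume described above.
print(earnings_story("Example Corp", "Q2", 1.12, 1.05, 432.0))
```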
But AP says the purpose of having “robot journalists” is not about replacing its reporters, at least in the foreseeable future. Instead, it is to allow the reporters to spend more time on high-quality journalism.
...Last year, we wrote about Narrative Science, another story automation company, that claims it can do the type of deep analysis a $250,000 per year consultant would do.
This is an article (downloadable as a PDF) that is focused on the situation in America - however, it does a nice job of listing what advanced industries are, which is applicable to any economy that wants to invest in a future where people really make a difference.
America’s Advanced Industries
What they are, where they are and why they matter.
The need for economic renewal in the United States remains urgent. Years of disappointing job growth and stagnant incomes for the majority of workers have left the nation shaken and frustrated. At the same time, astonishing new technologies—ranging from advanced robotics and “3-D printing” to the “digitization of everything”—are provoking genuine excitement even as they make it hard to see where things are going.
Hence this paper: At a critical moment, this report asserts the special importance to America’s future of what the paper calls America’s “advanced industries” sector.
Characterized by its deep involvement with technology research and development (R&D) and STEM (science, technology, engineering, and math) workers, the sector encompasses 50 industries ranging from manufacturing industries such as automaking and aerospace to energy industries such as oil and gas extraction to high-tech services such as computer software and computer system design, including for health applications.
These industries encompass the nation’s “tech” sector at its broadest and most consequential. Their dynamism is going to be a central component of any future revitalized U.S. economy. As such, these industries represent the country’s best shot at supporting innovative, inclusive, and sustainable growth. For that reason, this report provides a wide-angle overview of the advanced industry sector that reviews its role in American prosperity, assesses key trends, and maps its metropolitan and global competitive standing before outlining high-level strategies to enhance that standing.
So if the advent of ‘smart-ish-enough’ robots isn’t alarming enough - here’s something that may also be progressing on a Moore’s Law trajectory.
Artificial Brain Edges Closer to Reality
Japanese researchers announced recently that they turned human embryonic stem cells into a three-dimensional structure similar to the cerebellum, the part of the brain responsible for motor movements and receiving information from our senses.
Even though the structure didn't last very long, it's another small step in building an "artificial brain," a challenge that has teams of scientists across the globe working to construct living tissue, silicon circuits and computer algorithms together into something that can perform the same tasks as our own living gray matter.
Some projects are massive. The European Union's decade-long $1.2 billion "Human Brain Project" includes 183 principal investigators from 24 countries whose stated goal is to simulate and map the human brain, develop brain-inspired computer technologies and explore human brain diseases. But last year, the HBP got mired in a dispute between neuroscientists and computer scientists about whether more of the work should be done on powerful supercomputers rather than in medical labs. Now, the entire project is undergoing an external review.
Here’s something about both Big Data and microbial ecologies in urban landscapes - Very Interesting. I can imagine that in the next decade we’ll not only be implementing a genetic census of our human populations, but also regularly monitoring the microbial ecologies of our work, play and living environments. One interesting question: is there more or less bubonic plague, anthrax, etc. around the public infrastructure today than there was 20, 50, or 70 years ago?
Big Data and Bacteria: Mapping the New York Subway’s DNA
Scientists in 18-Month Project Gather DNA Throughout Transit System to Identify Germs, Study Urban Microbiology
Aboard a No. 6 local train in Manhattan, Weill Cornell researcher Christopher Mason patiently rubbed a nylon swab back and forth along a metal handrail, collecting DNA in an effort to identify the bacteria in the New York City subway.
In 18 months of scouring the entire system, he has found germs that can cause bubonic plague uptown, meningitis in midtown, stomach trouble in the financial district and antibiotic-resistant infections throughout the boroughs.
Frequently, he and his team also found bacteria that keep the city livable, by sopping up hazardous chemicals or digesting toxic waste. They could even track the trail of bacteria created by the city’s taste for pizza—identifying microbes associated with cheese and sausage at scores of subway stops.
The big-data project, the first genetic profile of a metropolitan transit system, is in many ways “a mirror of the people themselves who ride the subway,” said Dr. Mason, a geneticist at the Weill Cornell Medical College.
It is also a revealing glimpse into the future of public health.
Across the country, researchers are combining microbiology, genomics and population genetics on a massive scale to identify the micro-organisms in the buildings and confined spaces of entire cities.
Science and fiction are both strange - here’s something that fiction envisioned decades ago and that we all recognize.
Scanadu: The medical Tricorder from Star Trek is here
It's called "Scanadu Scout" -- after Xanadu, an ancient city of great splendor and scientific progress, made famous by English poet S. T. Coleridge -- and the greatest thing about it is that it's not a design concept, nor a million-dollar prototype, but an actual product. After a successful crowdfunding round via Indiegogo, the Scanadu began shipping to backers at the end of January.
It is a tiny, round and entirely white device -- even though a black version is planned -- and it works by placing it on one's forehead.
Through its sensor, and in a matter of seconds, the Scanadu measures heart rate, temperature, blood pressure, oxygen level and provides a complete ECG reading.
Here’s their website: https://www.scanadu.com/scout
Now here’s a fabulous 18-minute TED Talk that is well worth the listen - some groundbreaking research on human emotion and the possible treatment of depression.
The science of emotions: Jaak Panksepp
Given their inherently subjective nature, emotions have long been a nearly impenetrable topic for scientific research. Affective neuroscientist Jaak Panksepp explains a modern approach to emotions, and how taking seriously the emotions of other animals might soon improve the lives of millions.
Jaak Panksepp introduced the concept of Affective Neuroscience in 1990, consisting of an overarching vision of how mammalian brains generate experienced affective states in animals, as effective models for fathoming the primal evolutionary sources of emotional feelings in human beings. This work has implications for further developments in Biological Psychiatry, ranging from an understanding of the underlying brain disorders to new therapeutic strategies. Panksepp, Ph.D., is a professor and the Baily Endowed Chair of Animal Well-Being Science at the College of Veterinary Medicine, Washington State University. His scientific contributions include more than 400 papers devoted to the study of basic emotional and motivational processes of the mammalian brain. He has conducted extensive research on brain and bodily mechanisms of feeding and energy-balance regulation, sleep physiology, and, most importantly, the study of emotional processes, including associated feeling states, in other animals.
For Fun and Creativity
This is an interesting approach to participatory gaming. The video is only 2 minutes and is lovely.
Elegy for a Dead World -- A Game About Writing
In Elegy for a Dead World, you travel to distant planets and create stories about the people who once lived there.
Three portals have opened to uncharted worlds. Earth has sent a team of explorers to investigate them, but after an accident, you are the sole survivor. Your mission remains the same: survey these worlds and write the only accounts of them that outsiders will ever know.