Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.
Many thanks to those who enjoy this. ☺
In the 21st Century curiosity will SKILL the cat.
“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9
In dramatic statement, European leaders call for ‘immediate’ open access to all scientific papers by 2020
Samsung and SK Telecom to build World-First Nationwide LoRaWAN Network Dedicated to Internet of Things
Number of metal bands per capita in Europe
The basic rules of the game for creating and capturing economic value were once fixed in place. For years, or even decades, companies pursued the same old business models (usually selling goods or services, building and renting assets and land, and offering people’s time as services) and tried to execute better than their competitors did. But now, business model disruption is changing the very nature of economic returns and industry definitions. All industries are seeing rapid displacement, disruption, and, in extreme cases, outright destruction. The financial services industry, with its large commercial and investment banks and money managers, is no exception.
...Many of these organizations are in the lending business, but are using big data and cloud technologies rather than tellers and branches to speed lending and customer acquisition. Others are leveraging network business models, such as peer-to-peer lending, to bring together would-be lenders and borrowers. According to Dimon, “We are going to work hard to make our services as seamless and competitive as theirs.” His underlying thought is this: If his company doesn’t keep pace with today’s well-capitalized upstarts, it will begin to lose relevance in a platform-centric world.
“In lots of areas, it looks like the blockchain will replace the current centralized business model of the financial services industry.”
There are many innovative network business models coming after traditional financial services and banking organizations, and big banks are beginning to realize they must evolve in response if they want to remain viable in a digitally centric world, whether that evolution comes through acquiring, partnering with, or developing leading-edge technologies. But what’s less clear is why, exactly, these new entrants are so disruptive and powerful. What enables them to skirt the perceived constraints of these once ‘too large to fail’ incumbents and exploit unseen possibilities? In short, it is network-centered thinking with platform-based business models.
The issues discussed in this article and the others referred to are important for the future of our social-economies and our political systems. This could also be considered another weak signal of the looming transformation of our social-political-economies. Although the research concerns the American situation, its significance is important for all of us.
In 2014 we published a study of political inequality in America, called “Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens.” Our central finding was this: Economic elites and interest groups can shape U.S. government policy — but Americans who are less well off have essentially no influence over what their government does. This was in line with a good deal of previous research by Larry Bartels, Martin Gilens, Larry Jacobs and Benjamin Page, Elizabeth Rigby and Gerald Wright, and others. But for some reason, our paper caught the media’s attention in a way that few academic journal articles do.
Since then, a number of questions and criticisms have been raised about our work — some offering sensible critiques and alternative perspectives and others simply mistaken. We have responded in print to some of these, and will list some of those responses at the end of this post. Here we will respond briefly to the most important challenges to our research. In brief, we don’t believe that any of these critiques, individually or collectively, undermine our central claims.
Here’s a 3 min read by Lawrence Lessig discussing another ‘weak signal’ related to the future of the governance of our social-political-economies.
In the history of constitutions across the world, America has had a unique place: Ours was the first constitution ratified by the people in convention. But Iceland has now done something much more significant: For the first time in the history of the world, and using a technology only possible in the 21st century, the people of a nation have crafted their own constitution through an open and inclusive crowd-sourcing process. Yet astonishingly, that constitution remains unenforced.
As everyone in [Iceland] knows, after the financial disasters of 2008, the citizens of Iceland began a process to claim back their own sovereignty. Building on the values identified by 1,000 randomly selected citizens, Icelanders launched a process to crowdsource a new constitution. That initiative was then ratified when the Parliament established a procedure for selecting delegates to a drafting commission. More than 500 citizens ran to serve on that 25-person commission. Over four months, the commissioners met to draft a constitution, with their work made available for public comment throughout the process. More than 3,600 comments were offered by the public, leading to scores of modifications. The final draft, adopted unanimously, was then sent to the parliament and to the people. More than two-thirds of voters endorsed the document in a non-binding referendum as the basis of a new constitution.
Never in the history of constitutionalism has anything like this ever been done. If democracy is rule by the people — if the sovereignty of a democratic nation is ultimately the people — then this process and the constitution it produced is as authentic and binding as any in the world. Yet the parliament of Iceland has refused to allow this constitution to go into effect. And the question that anyone in the movements for democracy across the world must ask is just this: By what right?
New forms of governance - as part of our challenge to re-imagine everything - also pertain, maybe especially, to how we govern science and scientific research results. This is welcome news for all of us who seek to know more.
In dramatic statement, European leaders call for ‘immediate’ open access to all scientific papers by 2020
In what European science chief Carlos Moedas calls a "life-changing" move, European Union member states today agreed on an ambitious new open access (OA) target. All scientific papers should be freely available by 2020, the Competitiveness Council—a gathering of ministers of science, innovation, trade, and industry—concluded after a 2-day meeting in Brussels. But some observers are warning that the goal will be difficult to achieve.
The OA goal is part of a broader set of recommendations in support of Open Science, a concept which also includes improved storage of and access to research data. The Dutch government, which currently holds the rotating E.U. presidency, had lobbied hard for Europe-wide support for Open Science, as had Carlos Moedas, the European Commissioner for Research and Innovation.
"We probably don't realize it yet, but what the Dutch presidency has achieved is just unique and huge," Moedas said at a press conference. "The commission is totally committed to help move this forward."
"The time for talking about Open Access is now past. With these agreements, we are going to achieve it in practice," the Dutch State Secretary for Education, Culture, and Science, Sander Dekker, added in a statement.
This call for open access is already underway - in addition to Wikileaks, Snowden, Manning, and the Panama Papers, here’s someone else who is making a huge impact on the world by making knowledge open to all. It is time to re-imagine how science does peer review and makes research available to all. Perhaps a wiki-review foundation, like Wikipedia, would make peer review itself more transparent.
Elbakyan also answered nearly every question I had about her operation of the website, interaction with users, and even her personal life. Among the few things she would not disclose is her current location, because she is at risk of financial ruin, extradition, and imprisonment because of a lawsuit launched by Elsevier last year.
...increasing numbers, researchers around the world are turning to Sci-Hub, which hosts 50 million papers and counting. Over the 6 months leading up to March, Sci-Hub served up 28 million documents. More than 2.6 million download requests came from Iran, 3.4 million from India, and 4.4 million from China. The papers cover every scientific topic, from obscure physics experiments published decades ago to the latest breakthroughs in biotechnology. The publisher with the most requested Sci-Hub articles? It is Elsevier by a long shot—Sci-Hub provided half-a-million downloads of Elsevier papers in one recent week.
These statistics are based on extensive server log data supplied by Alexandra Elbakyan, the neuroscientist who created Sci-Hub in 2011 as a 22-year-old graduate student in Kazakhstan. I asked her for the data because, in spite of the flurry of polarized opinion pieces, blog posts, and tweets about Sci-Hub and what effect it has on research and academic publishing, some of the most basic questions remain unanswered: Who are Sci-Hub’s users, where are they, and what are they reading?
For someone denounced as a criminal by powerful corporations and scholarly societies, Elbakyan was surprisingly forthcoming and transparent. After establishing contact through an encrypted chat system, she worked with me over the course of several weeks to create a data set for public release: every download event over the 6-month period starting 1 September 2015, including the digital object identifier (DOI) for every paper. To protect the privacy of Sci-Hub users, we agreed that she would first aggregate users’ geographic locations to the nearest city using data from Google Maps; no identifying internet protocol (IP) addresses were given to me. (The data set and details on how it was analyzed are freely accessible)
The talk about the emerging Internet of Things continues - and now it has a prototype being implemented in Korea, a world leader in Internet access and infrastructure. This is well worth the read (and includes a 5 min video) - something every society will have to decide how best to support and enable.
The sheer volume of data created by the IoT will have unfathomable impact on the networking systems used today. Deep analytics will require distributed datacenters and real-time response to events. Fast, agile networks are crucial to enable the real-time analysis of sensor data. Given these requirements, it is very unlikely that today's networks will stand up to the demands of 2020.
Samsung and SK Telecom to build World-First Nationwide LoRaWAN Network Dedicated to Internet of Things
Samsung Electronics today announced a new contract with SK Telecom to deploy the world’s first commercial Internet of Things (IoT)-dedicated nationwide LoRaWAN network.
The network will be deployed across Korea using the 900 MHz frequency band. The commercial service is scheduled to launch in Daegu, Korea’s fourth largest city, next month and will be available nationwide by the middle of this year.
LoRaWAN is designed to provide a Low Power Wide Area Network with features specifically needed to support low-cost, mobile, secure, bi-directional communication for Internet of Things (IoT), machine-to-machine (M2M), smart city, and industrial applications. It is optimized for low power consumption and to support large networks with millions of devices. Its features include redundant operation, geolocation, low cost, and low power; devices can even run on energy-harvesting technologies, enabling mobility and ease of use for the Internet of Things.
LoRaWAN network architecture is typically laid out in a star-of-stars topology in which gateways act as transparent bridges, relaying messages between end-devices and a central network server in the backend. Gateways are connected to the network server via standard IP connections while end-devices use single-hop wireless communication to one or many gateways. All end-point communication is generally bi-directional, but the protocol also supports operations such as multicast, enabling over-the-air software upgrades or other mass-distribution messages that reduce on-air communication time.
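A minimal sketch of that star-of-stars flow, with made-up field names rather than the real LoRaWAN frame format: several gateways hear the same uplink from one end-device, forward it over IP, and the network server deduplicates by device and frame counter, keeping the copy with the strongest signal.

```python
class NetworkServer:
    """Toy model of a LoRaWAN network server's deduplication step.
    Field names (dev_addr, fcnt, rssi) are illustrative only."""

    def __init__(self):
        # (device address, frame counter) -> (best RSSI, gateway that heard it)
        self.best = {}

    def uplink(self, dev_addr, fcnt, gateway_id, rssi):
        """Record a forwarded uplink, keeping only the strongest copy."""
        key = (dev_addr, fcnt)
        if key not in self.best or rssi > self.best[key][0]:
            self.best[key] = (rssi, gateway_id)


server = NetworkServer()
# One end-device transmission, heard simultaneously by three gateways
for gw, rssi in [("gw-1", -110), ("gw-2", -95), ("gw-3", -102)]:
    server.uplink(dev_addr="dev-42", fcnt=7, gateway_id=gw, rssi=rssi)

print(server.best[("dev-42", 7)])  # the gw-2 copy wins: strongest RSSI
```

Because the gateways are transparent bridges, the end-device never needs to know which (or how many) gateways heard it, which is part of what keeps the devices cheap and low-power.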
For those interested in the continuing evolution of blockchain technologies, this is an interesting milestone for Ethereum - worth the read. While this technology is and will be disruptive, the human dimension still requires a lot of consideration - there can be no ‘trustless’ system.
At 9 a.m. GMT this morning, funding closed on an entity called The DAO. It’s a blockchain-enabled financial vehicle that’s structured kind of like a cross between Kickstarter and a venture capital fund and which now runs autonomously—no humans needed—on the fledgling Ethereum network. The DAO (short for decentralized autonomous organization) raised over US $150 million worth of the bitcoin-like cryptocurrency, Ether, during a feverish, 27-day sale.
The DAO’s launch is a feat that should surely stand out as a feather in the cap for the Ethereum network, as it is the most successful crowdfunding campaign yet documented anywhere, ever.
But yesterday, just hours before The DAO was scheduled to open for business and begin taking project proposals, three blockchain researchers published an article outlining multiple flaws in the governance structure of the organization that they say could be used as vectors for attack. The researchers are asking everyone involved with The DAO to temporarily halt funding activities and fix the critical problems.
McLuhan noted that the computer collapses history into the moment - here is one more development pointing to the looming phase transition in our concepts of time-space and privacy.
Last December the algorithm developed by NTechLab edged out Google and many other entrants in the University of Washington’s MegaFace face recognition challenge. Presented with a large set (500,000 images of more than 20,000 users), the NTechLab program had a success rate of around 73 percent, while Google’s FaceNet program scored just above 70 percent.
Last month, a Russian software developer named Andrey Mima announced that he had been able to track down two women he’d taken a photo of in 2010, with nothing more than their faces. Mima posted on Vkontakte, Russia’s equivalent of Facebook, that he’d used a facial recognition app called FindFace. All he had to do was input the photo and FindFace’s face recognition software did the rest, scanning publicly available Vkontakte photos and finding the two women, so he was finally able to send them the photo he’d taken.
At the end of his post, Mima briefly noted that the service’s implications for privacy were a little alarming. Overall, though, he was writing to recommend FindFace and praise its effectiveness.
At that point FindFace was about a month old, and in the intervening period the service has proved to be what it sounds like: an unmitigated privacy disaster.
We think of robotics and artificial intelligence - and maybe even artificial emotional intelligence (given the capacity to recognize faces and the emotions on them) - but what about senses? We have to think of where this will be in the next decade or two. The two very short videos indicate just how sensitive robots are becoming to the presence of others.
One of the most useful things about robots is that they don’t feel pain. Because of this, we have no problem putting them to work in dangerous environments or having them perform tasks that range between slightly unpleasant and definitely fatal to a human. And yet, a pair of German researchers believes that, in some cases, feeling and reacting to pain might be a good capability for robots to have.
The researchers, from Leibniz University of Hannover, are developing an “artificial robot nervous system to teach robots how to feel pain” and quickly respond in order to avoid potential damage to their motors, gears, and electronics. They described the project last week at the IEEE International Conference on Robotics and Automation (ICRA) in Stockholm, Sweden, and we were there to ask them what in the name of Asimov they were thinking when they came up with this concept.
Why is it a good idea for robots to feel pain? The same reason why it’s a good idea for humans to feel pain, said Johannes Kuehn, one of the researchers. “Pain is a system that protects us,” he told us. “When we evade from the source of pain, it helps us not get hurt.” Humans that don’t have the ability to feel pain get injured far more often, because their bodies don’t instinctively react to things that hurt them.
This is an interesting short article with another promise of how drones can enable some massive innovation.
"The only way we're going to take on these age-old problems is with techniques that weren't available to us before," CEO and former NASA engineer Lauren Fletcher said. "By using this approach we can meet the scale of the problem out there."
The system could be a serious boost for the planet's forests
A drone start-up is going to counter industrial scale deforestation using industrial scale reforestation.
BioCarbon Engineering wants to use drones for good, using the technology to seed up to one billion trees a year, all without having to set foot on the ground.
26 billion trees are currently being burned down every year while only 15 billion are replanted. If successful, the initiative could help address this shortfall in a big way.
Drones should streamline reforestation considerably, with hand-planting being slow and expensive.
This article discusses the nature of species-environment transformation. Interesting and humorous at the same time.
Grolar vs. pizzly bear
Grizzly-polar bear hybrids, the existence of which was confirmed 10 years ago, are once again in the news after one was shot by a hunter in a remote territory of Canada. The bear has white fur, brown paws, and a grizzly-shaped head. Climate change has encouraged the two species to mingle more often recently, prodding polar bears to roam farther from their shrinking Arctic territory while grizzlies in Canada and Alaska move north.
Climate change is one strange matchmaker. Warmer temperatures have led to shifting habitats and shifting mating habits. And occasionally, when two bears collide, the result is neither grizzly nor polar, but pizzly.
Or should I say grolar? In the coming years, we’ll face more naming conundrums like this one. A 2015 Nature study found that by the end of the century, 6 percent of species worldwide are likely to come into contact with new species to potentially reproduce with. Hybrids are an ordinary part of the evolutionary process (even some humans are part Neanderthal), but interbreeding isn’t always advantageous. Washington Post reports that as grizzlies and grolars encroach on dwindling polar territory, they may accelerate the decline of their purebred cousins.
In the Arctic, where icy habitats are melting together, even more weird stuff is going down. Introducing some hybrids brought to life by the Cupid’s arrow of climate change — and some speculation as to what we’ll end up calling these critters:
This may be a true game-changer - heralding a deep change in the conditions of change - in our domestication of DNA and - how we come to know about life - literally. MUST READ.
This week I beat an invading virus, copied all my DNA, and split in two. #blessed #yolo #celllife
What would our cells say if they could blog? We’ll soon know – the CRISPR gene editing technique has been adapted to make cells keep a log of what happens to them, written inside their own DNA.
Such CRISPR-based logging could have a huge range of uses, from smart cells that monitor our health from within, to helping us understand exactly how our bodies develop and grow.
This exciting technology could record the biography of a cell, says synthetic biologist Darren Nesbeth of University College London, who was not involved in the work.
For example, therapeutic immune cells could be engineered to patrol a person’s body, recording what they see and reporting back to clinicians when they are recaptured. “That’s just one of many possible examples,” says Nesbeth.
CRISPR-based logging was developed by Timothy Lu and his colleagues at the Massachusetts Institute of Technology. They designed a system where CRISPR can be activated within a cell whenever it experiences a particular event – for example, exposure to a particular chemical.
When this happens, CRISPR generates mutations in a specific region of DNA, effectively leaving a mark in a log to record the event. Analysing how many mutations there are in a cell’s log then reveals roughly how many of these events have occurred.
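A toy simulation, with made-up numbers, of the logging principle just described: each triggering event gives every still-intact site in a recording locus some chance of being mutated, and counting the marks afterwards lets you invert the expected mutated fraction f = 1 - (1 - p)^k to estimate how many events k occurred. This is a sketch of the idea, not the actual MIT system.

```python
import math
import random


def record_events(n_sites=100, p_mut=0.05, n_events=20, rng=None):
    """Simulate a CRISPR-style recording locus: each event gives every
    still-intact site a p_mut chance of being mutated (marked)."""
    rng = rng or random.Random(0)
    sites = [False] * n_sites  # False = intact, True = mutated
    for _ in range(n_events):
        for i in range(n_sites):
            if not sites[i] and rng.random() < p_mut:
                sites[i] = True
    return sites


def estimate_events(sites, p_mut=0.05):
    """Invert f = 1 - (1 - p)^k to recover an estimate of the event count k
    from the observed mutated fraction f."""
    f = sum(sites) / len(sites)
    return math.log(1 - f) / math.log(1 - p_mut)


sites = record_events(n_events=20)
print(estimate_events(sites))
```

With these parameters the estimate should land near the true count of 20; as the article notes, the readout is only a rough count, because the mutation process is stochastic and the log saturates once most sites have been marked.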
For a more detailed account of the cutting edge of this work in synthetic biology, here is a 48-minute video presentation by Timothy Lu.
In his iBiology talk, Dr. Timothy Lu describes how biological circuits, using principles from engineering, can be used as digital (all or none) or analog (continuous spectrum) sensors, and can be programmed in a cell to ‘remember’ an input and pass this memory to the cell’s offspring after it divides. Dr. Lu gives several examples of biological circuits that his lab created that can allow a cell to sense the extracellular environment and give a readable output that can be maintained through subsequent cell divisions. In the future, these types of circuits can be developed as non-invasive diagnostics or therapeutics in humans. Dr. Lu ends his talk by discussing the open challenges facing this area of synthetic biology.
While these products may not be artificial life - this is another domain in the domestication of DNA - for the production of new materials, parts and other products. The graphics alone are worth the view.
“The paper turns the problem around from one in which an expert designs the DNA needed to synthesize the object, to one in which the object itself is the starting point, with the DNA sequences that are needed automatically defined by the algorithm,” Bathe says. “Our hope is that this automation significantly broadens participation of others in the use of this powerful molecular design paradigm.”
Like 3-D printing did for larger objects, the method makes it easy to build nanoparticles out of DNA.
Researchers can build complex, nanometer-scale structures of almost any shape and form, using strands of DNA. But these particles must be designed by hand, in a complex and laborious process.
This has limited the technique, known as DNA origami, to just a small group of experts in the field.
Now a team of researchers at MIT and elsewhere has developed an algorithm that can build these DNA nanoparticles automatically.
In this way the algorithm, which is reported together with a novel synthesis approach in the journal Science this week, could allow the technique to be used to develop nanoparticles for a much broader range of applications, including scaffolds for vaccines, carriers for gene editing tools, and in archival memory storage.
Unlike traditional DNA origami, in which the structure is built up manually by hand, the algorithm starts with a simple, 3-D geometric representation of the final shape of the object, and then decides how it should be assembled from DNA, according to Mark Bathe, an associate professor of biological engineering at MIT, who led the research.
This article refers to the domestication of DNA via transforming bacteria into manufacturing engines.
A method to produce significant amounts of semiconducting nanoparticles for light-emitting displays, sensors, solar panels and biomedical applications has gained momentum with a demonstration by researchers at the Department of Energy’s Oak Ridge National Laboratory.
While zinc sulfide nanoparticles – a type of quantum dot that is a semiconductor – have many potential applications, high cost and limited availability have been obstacles to their widespread use. That could change, however, because of a scalable ORNL technique outlined in a paper published in Applied Microbiology and Biotechnology.
Unlike conventional inorganic approaches that use expensive precursors, toxic chemicals, high temperatures and high pressures, a team led by ORNL’s Ji-Won Moon used bacteria fed by inexpensive sugar at a temperature of 150 degrees Fahrenheit in 25- and 250-gallon reactors. Ultimately, the team produced about three-fourths of a pound of zinc sulfide nanoparticles – without process optimization, leaving room for even higher yields.
The ORNL biomanufacturing technique is based on a platform technology that can also produce nanometer-size semiconducting materials as well as magnetic, photovoltaic, catalytic and phosphor materials. Unlike most biological synthesis technologies that occur inside the cell, ORNL’s biomanufactured quantum dot synthesis occurs outside of the cells. As a result, the nanomaterials are produced as loose particles that are easy to separate through simple washing and centrifuging.
Everyone should have heard about the crisis we face concerning antibiotics - our current stock is rapidly becoming less effective. Here’s some good news - not only about new antibiotics but new methods to find them.
...the tools that Epstein and his colleagues have used to make scientific headlines. And they’re cheap. The hacked pipette tip box costs less than $10 to make. “You could build this in your garage,” he said, turning the box over in his hand.
Behind these cheap items there’s a powerful idea. Bacteria make antibiotics naturally, which means that if you can grow new bacteria in a lab, the microbes can offer up new drugs. Unfortunately, for the past century, microbiologists have failed to unlock the secret to cultivating the vast majority of bacterial species.
“Everyone thought the solution would be high-tech,” said Epstein. But the one that he and his colleagues have found is remarkably straightforward. They raise bacteria by giving them a comfortable place to grow — a disk or a box will do. A company they founded, called NovoBiotic, is now testing the antibiotics made by the bacteria in the hopes of putting them into clinical trials.
...the researchers discovered 25 different antibiotics, all of which appear to be new to science.
Lewis led the analysis of two of these new drugs, which he and his colleagues have dubbed lassomycin and teixobactin. “They’re terrifically interesting,” said Lewis. Each one kills bacteria using an attack never seen before in an antibiotic. Lassomycin drains the fuel from bacteria, while teixobactin interferes with the growth of their cell walls.
Laboratory testing with a phone - another signal of the change in medicine and patient empowerment. A 40-second video is included.
Seeking to relieve the burden on clinics and primary care doctors, researchers created a urinalysis system that uses a black box and smartphone camera to analyze a standard medical dipstick.
There’s a good reason your doctor asks for a urine sample at your annual checkup. A simple, color-changing paper test, dipped into the specimen, can measure levels of glucose, blood, protein and other chemicals, which in turn can indicate evidence of kidney disease, diabetes, urinary tract infections and even signs of bladder cancer.
The simple test is powerful, but it isn’t perfect: It takes time, costs money and creates backlogs for clinics and primary care physicians. Results are often inconclusive, requiring both patient and doctor to book another appointment. Patients with long-term conditions like chronic urinary tract infections must wait for results to confirm what both patient and doctor already know before getting antibiotics. Tracking patients’ progress with multiple urine tests a day is out of the question.
Some innovators have tried to democratize urine testing by creating a low-cost way to analyze one of medicine’s trusty staples – the urinary dipstick – in any setting, even at home.
One more breakthrough in understanding the universal structure of DNA - this may be a significant contribution to a deeper understanding of synthetic life as well. What this article shows is how life itself finds affordances that enable the re-purposing of structure to new functions.
“The big problem in biology is the question of how a protein does what it does. We think the answer rests in protein evolution,” says University of Illinois professor and bioinformatician Gustavo Caetano-Anollés. “For the first time, we have traced evolution onto a biological network.”
“It turns out that there are little snippets in our genes that are incredibly conserved over time,” Caetano-Anollés says. “And not just in human genomes. When we look at higher organisms, such as plants, fungi and animals, as well as bacteria, archaea, and viruses, the same snippets are always there. We see them over and over again.”
A new University of Illinois study demonstrates the evolution of protein structure and function over 3.8 billion years.
Snippets of genetic code, consistent across organisms and time, direct proteins to create “loops,” or active sites that give proteins their function.
The link between structure and function in proteins can be thought of as a type of network.
Demonstrating evolution in this small-scale network may help others understand how different types of networks, such as the internet or social networks, change over time.
The full paper is here
This is amazing - from 7.7% efficiency to 99% efficiency in capturing infrared light - cheap, powerful night-vision and more is on the way.
A breakthrough by an Australian collaboration of researchers could make infra-red technology easy-to-use and cheap, potentially saving millions of dollars in defence and other areas using sensing devices, and boosting applications of technology to a host of new areas, such as agriculture.
Infra-red devices are used for improved vision through fog, for night vision, and for observations not possible with visible light; high-quality detectors cost approximately $100,000 (including the device at the University of Sydney), and some require cooling to -200°C.
Now, research spearheaded by researchers at the University of Sydney has demonstrated a dramatic increase in the absorption efficiency of light in a layer of semiconductor that is only a few hundred atoms thick - to almost 99 percent light absorption from the current inefficient 7.7 percent.
The findings will be published overnight in the high-impact journal Optica.
The phase transition into a new energy geo-politics of zero marginal cost energy is now blooming.
About 147 gigawatts (GW) of capacity was added in 2015, roughly equivalent to Africa's generating capacity from all sources.
When measured against a country's GDP, the biggest investors were small countries like Mauritania, Honduras, Uruguay and Jamaica.
New solar, wind and hydropower sources were added in 2015 at the fastest rate the world has yet seen, a study says.
Investments in renewables during the year were more than double the amount spent on new coal and gas-fired power plants, the Renewables Global Status Report found.
For the first time, emerging economies spent more than the rich on renewable power and fuels.
Over 8 million people are now working in renewable energy worldwide.
For a number of years the global spend on renewables has been increasing, and 2015 saw it reach a new peak, according to the report.
On the energy front, advances keep coming.
“A lot of the work thus far in this field has been proof-of-concept demonstrations,” Bierman says. “This is the first time we’ve actually put something between the sun and the PV cell to prove the efficiency” of the thermal system. Even with this relatively simple early-stage demonstration, Bierman says, “we showed that just with our own unoptimized geometry, we in fact could break the Shockley-Queisser limit.” In principle, such a system could reach efficiencies greater than that of an ideal solar cell.
System converts solar heat into usable light, increasing device’s overall efficiency.
A team of MIT researchers has for the first time demonstrated a device based on a method that enables solar cells to break through a theoretically predicted ceiling on how much sunlight they can convert into electricity.
Ever since 1961 it has been known that there is an absolute theoretical limit, called the Shockley-Queisser Limit, to how efficient traditional solar cells can be in their energy conversion. For a single-layer cell made of silicon — the type used for the vast majority of today’s solar panels — that upper limit is about 32 percent. But it has also been known that there are some possible avenues to increase that overall efficiency, such as by using multiple layers of cells, a method that is being widely studied, or by converting the sunlight first to heat before generating electrical power. It is the latter method, using devices known as solar thermophotovoltaics, or STPVs, that the team has now demonstrated.
The findings are reported this week in the journal Nature Energy, in a paper by MIT doctoral student David Bierman, professors Evelyn Wang and Marin Soljačić, and four others.
Even though this may seem a bit random … it is an important innovation.
‘Extractor’ removes predictability from computer-generated numbers
Ask a computer to pick a random number and you’ll probably get a response that isn’t completely unpredictable. Because they are deterministic automatons, computers struggle to generate numbers that are truly random. But a new advance on a method known as a randomness extractor makes it easier for machines to roll the dice, generating truly random numbers by harvesting randomness from the environment.
The method improves on previous randomness extractors because it requires only two sources of randomness, and those sources can be very weak. “It’s a big breakthrough on a fundamental problem,” says computer scientist Dana Moshkovitz of MIT. “It’s a huge improvement over anything that was done before.”
Computer scientists Eshan Chattopadhyay and David Zuckerman of the University of Texas at Austin will present the new randomness extractor June 20 in Cambridge, Mass., at the Symposium on the Theory of Computing.
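The Chattopadhyay-Zuckerman construction itself is intricate, but the classic inner-product extractor - a much older two-source extractor that works when each independent source has more than half its bits of min-entropy - gives the flavor of how two weak sources can yield one near-uniform bit. A minimal Python sketch (the function name is mine, for illustration):

```python
def inner_product_extractor(x: bytes, y: bytes) -> int:
    """Extract one near-uniform bit from two independent weak random sources.

    Computes the inner product of x and y viewed as bit vectors over GF(2):
    the parity of the number of positions where both strings have a 1 bit.
    """
    if len(x) != len(y):
        raise ValueError("sources must be the same length")
    acc = 0
    for a, b in zip(x, y):
        acc ^= a & b  # XOR-accumulate the bitwise AND, byte by byte
    # Parity of the surviving bits equals the GF(2) inner product,
    # because parity is linear under XOR.
    return bin(acc).count("1") & 1
```

One output bit per pair of samples is slow, but the point is the guarantee: if the two sources are independent and each has min-entropy above half its length, the output bit is provably close to uniform. The advance reported here relaxes that requirement dramatically, so the two sources can be very weak.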
Here’s another signal to file under Moore’s Law is Dead - Long live Moore’s Law - or a new Broadway Hit - Diamonds are a Computer’s Best Friend. There’s a 1 min video as well.
We recently demonstrated CMOS-compatible diamond semiconductors—with both p-type and n-type devices—by successfully fabricating diamond PIN [abbreviation for a p-type—intrinsic, undoped—n-type junction] diodes with a million-times better performance than silicon and one-thousand-times thinner.
National lab spinoff has fab in Illinois
Diamonds may soon be the semiconductor industry's "best friend." Startup Akhan Semiconductor Inc. (Gurnee, Ill.) plans to make the promise of diamonds come true by licensing the diamond semiconductor process from the U.S. Energy Department's Argonne National Laboratory (Lemont, Ill.). Diamond semiconductors have long been known to be faster, consume less power, and be thinner and lighter weight than silicon, but Akhan Semiconductor is the first vendor with a foot in the door of actually realizing those capabilities.
Akhan Semiconductor has a 200mm wafer fab in Gurnee, Ill. and expects to announce a diamond semiconductor IC in a consumer product at the Consumer Electronics Show (CES) 2017.
Khan has also demonstrated 100-gigahertz (GHz) devices by virtue of the ultra-low resistance of diamond, which can be deposited on silicon, glass, sapphire or metal substrates. Those kinds of speeds could revitalize the processor races, which have been idled at 5 GHz for a decade. Remember when every new processor was clocked at a higher rate? With silicon, 5 GHz is the limit, since high power consumption and thermal hot-spots turn devices into soup, but diamond has 22 times the thermal conductivity of silicon and five times that of copper, Khan claims.
…its ultimate goal is to take the heat off (literally) Big Data applications with ultra-cool-running processors. In fact, the high speeds diamond CMOS is capable of can be traded off against heat. In other words, data centers could cut their heat vastly by running diamond processors at the same 5 GHz as silicon, or could bump up their speed to the sub-terahertz range while consuming the same power as silicon.
This is a cool new sort of material or coating or ….. There is a 3.5 min video that’s fun to watch as well.
Cilllia hair looks creepy, but it can do a lot.
A while back, MIT researchers found a way to easily create 3D-printed hair: smart software can create thousands of tiny polymer strands (smaller than 100 microns, if you want) that give objects a fuzzy texture. Now, however, they're finding practical uses for those natural-feeling surfaces. If you specify the right angles, density, height and thickness, you can make the hair do surprising things. On a basic level, you can create blocks that only stick to each other under certain conditions, or paint brushes that produce very specific effects. However, it really gets interesting when you vibrate the hairs -- you can create motors and sensors that are as baffling as they are clever.
You can have objects slide along a fixed path, like the metal disc you see above. It's also possible to produce hairy motors, such as a 'windmill' that kicks in when your phone rings. And the hair is surprisingly useful for sensors. Attach a microphone and you can detect a finger brushing along the hair's surface, including its swiping speed.
This is interesting - the map here only shows Europe but it’s still worth the view. I don’t know exactly what this means - but my instinct tells me there are some interesting cultural factors behind this. For comparison, there are 72 metal bands per million people in America. His website has a number of other interesting maps - worth a look.
One of my readers asked me to create a map showing the “density” of metal bands in European countries—and so I did. The following map shows the number of entries (which represent both active and inactive metal bands) in Encyclopaedia Metallum, divided by the country’s population in millions. For comparison: The number for the United States is 72.
And regarding the controversy surrounding the term per capita that sprang up in the comments: It is customary to use the label “per capita” in the titles of statistical studies in which a total (such as the number of metal bands) is divided by population size, even if the units used within the article itself are different.
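The arithmetic behind the map is just a ratio - entries divided by population in millions. A one-line sketch (the entry count below is a hypothetical figure chosen only so the result matches the article's U.S. value of 72):

```python
def bands_per_million(entries: int, population: int) -> float:
    """Entries in Encyclopaedia Metallum divided by population in millions."""
    return entries / (population / 1_000_000)

# Hypothetical numbers: ~23,040 entries over a population of 320 million
# would correspond to the article's U.S. figure of 72 bands per million.
us_density = bands_per_million(23_040, 320_000_000)
```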