This is how you change MIT. Change the world, MIT will catch up to it.
Neil Gershenfeld - Digital Reality
“The people who get to impose their metaphors on the culture get to define what we consider to be true.”
—Lakoff and Johnson
“Metaphors not only help us to think about the future; they are a resource deployed by a variety of actors to shape the future…Metaphors can mediate between structure and agency, but it is actors who choose to repeat old metaphors and introduce new ones. Thus, it is important to continue to monitor the metaphors at work to understand exactly what work it is that they are doing.”
The most effective metaphors—ones so fundamental that we forget they are metaphors—draw on embodied experience, or “embodied cognition,” fundamentally part of the way we think and act in the world. Industrial metaphors lack a connection to Lakoff and Johnson’s “basic domain of experience” of individuals, including our bodies, interactions with our physical environment, and interactions with other people. Industrial metaphors share an experiential perspective of a bodiless conglomerate technocratic actor, seeing like a Google, as it were. Embodied metaphors draw from the perspective of us as individual people.
Sara M. Watson on the Industrial Metaphors of Big Data - Data is the New “___”
Focus on Teams
Learning to work in teams is a main focus at Mayo—and a sharp departure from traditional training for doctors.
“The old model was, you’d go on rounds; the attending would ask a question, and the young resident had to get the right answer,” says Dr. Decker in Scottsdale. “In the new model, you’re part of a team, and somebody else might have the right answer.”
To understand the roles of team members who aren’t doctors, first-year Mayo students spend half-days shadowing clinic schedulers, registered nurses, nurse practitioners and physician assistants. They also assist in managing a panel of patients, as care coordinators do. For example, they review records to see which diabetes patients aren’t managing their health well; they call the patients on the phone to discuss why they are struggling; then the students consult with the patients’ primary-care doctors to determine the next steps.
Less Memorization
What’s being left out of medical education to make room for the new material?
Some schools are placing far less emphasis on memorizing facts, such as which drugs do what and how they interact with other drugs. Such information is now readily available electronically.
“The fund of medical knowledge is now growing and changing too fast for humans to keep up with, and the facts you memorize today might not be relevant five years from now,” says NYU’s Dr. Triola. Instead, what’s important is teaching “information-seeking behavior,” he says, such as what sources to trust and how to avoid information overload.
Innovation Is Sweeping Through U.S. Medical Schools - Preparing doctors—and in greater numbers—for new technologies and methods
The Earth may not be flat, but the web certainly is.
“There is no ‘top’ to the World-Wide Web,” declared a 1992 foundational document from the World Wide Web Consortium—meaning that there is no central server or organizational authority to determine what does or does not get published. It is, like Borges’ famous Library of Babel, theoretically infinite, stitched together with hyperlinks rather than top-down, Dewey Decimal-style categories. It is also famously open—built atop a set of publicly available industry standards.
While these features have connected untold millions and created new forms of social organization, they also come at a cost. Material seems to vanish almost as quickly as it is created, disappearing amid broken links or into the constant flow of the social media “stream.” It can be hard to distinguish fact from falsehood. Corporations have stepped into this confusion, organizing our browsing and data in decidedly closed, non-transparent ways. Did it really have to turn out this way?
The Future of the Web Is 100 Years Old
Schools thus provide the trust that is needed when an employer hires a new employee or a government seeks to hold institutions accountable for educating its citizens. The question for entrepreneurs looking to truly disrupt education is not how to democratize access to learning content. That nut has already been cracked.
The real question is, like bitcoin, how do we develop an open system for measuring, verifying and credentialing learning, particularly when learning is occurring in increasingly non-traditional ways? To put it differently, how do we break the monopoly that traditional, centralized institutions have over the critical issue of validation and trust?
An effective solution to this problem would supply the needed social proof, sorely missing today, that would validate and give meaning to free and ubiquitous learning content. It would allow self-directed learners to curate their own educational pathways (or have them curated by third parties). And it would provide a universal language for learners to communicate and showcase their skills and for employers and others to be able to understand in a much more transparent and nuanced fashion whether the people they hire possess the skills needed to succeed.
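To make the idea concrete, here is a minimal Python sketch of how a learning record could be hashed and signed so that a third party can verify it without going back to the issuing institution. Everything here is an illustrative assumption: the names, the record format, and the use of a single shared secret are invented for the example; a real open credentialing system would use public-key signatures anchored to a shared ledger rather than the symmetric key shown below.

```python
# Hypothetical sketch only: hash a learning record and sign the hash so anyone
# holding the verification key can check it without contacting the school.
import hashlib
import hmac

SCHOOL_SECRET = b"example-signing-key"  # stand-in for a real private key

def issue_credential(learner: str, skill: str) -> dict:
    """Issue a credential as a record plus a verifiable signature of its hash."""
    record = f"{learner}|{skill}"
    digest = hashlib.sha256(record.encode()).hexdigest()
    signature = hmac.new(SCHOOL_SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return {"record": record, "digest": digest, "signature": signature}

def verify_credential(cred: dict) -> bool:
    """Recompute the hash and check the signature; tampering breaks either step."""
    digest = hashlib.sha256(cred["record"].encode()).hexdigest()
    expected = hmac.new(SCHOOL_SECRET, digest.encode(), hashlib.sha256).hexdigest()
    return digest == cred["digest"] and hmac.compare_digest(expected, cred["signature"])

cred = issue_credential("Ada Learner", "Python programming")
print(verify_credential(cred))  # True; edit cred["record"] and it prints False
```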
Only if such a system is developed will we witness real innovation and advancement in education.
What Bitcoin Can Teach Us About Education
This is a must see 7min video - the original essay was written in 1958. This video makes us realize what a hugely complicated and complex technological reality a ‘pencil’ is, and how we been hugely interdependent for a very long time.
I, Pencil: The Movie
A film from the Competitive Enterprise Institute, adapted from the 1958 essay by Leonard E. Read. For more about I, Pencil
Here is a great must-view 22-minute TED Talk for anyone interested in how a company can thrive for decades while being different.
Ricardo Semler: Radical wisdom for a company, a school, a life
What if your job didn’t control your life? Brazilian CEO Ricardo Semler practices a radical form of corporate democracy, rethinking everything from board meetings to how workers report their vacation days (they don’t have to). It’s a vision that rewards the wisdom of workers, promotes work-life balance — and leads to some deep insight on what work, and life, is really all about. Bonus question: What if schools were like this too?
Here is a long, 103-minute video about the future of work. The panel is interesting and includes the authors of ‘The Second Machine Age’.
The Future of Work in the Age of the Machine: Panel 1
On February 19, 2015, The Hamilton Project convened academic experts and business leaders to discuss the future of work in the machine age. Opening remarks were delivered by former U.S. Treasury Secretary Robert E. Rubin. Following opening remarks, Erik Brynjolfsson and Andrew McAfee, of the Center for Digital Business at the Massachusetts Institute of Technology Sloan School of Management and authors of the best-selling book “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies” provided framing remarks. The first panel discussed labor market challenges, including the changing nature of work and its implications for workers of various skill types. The panel was joined by Erik Brynjolfsson, David Autor of MIT, Aneesh Chopra of Hunch Analytics, Larry Summers of Harvard University, and was moderated by Melissa Kearney of The Hamilton Project.
Here is a 25-minute video about what may be a UK version of the Canadian GC2020 initiative. The policy development community in the UK government is 20,000 strong.
What Can Design Bring to Government? By Andrea Siodmok from UK Policy Lab
The UK’s Policy Lab was launched at the beginning of April 2014. Policy Lab is a creative space where civil servants can experiment with new techniques and approaches to policy problems from data science to design. Its existence is born of a recognition that government needs to get better at policy-making; open it up, make it quicker, more digital and more connected with the people who are affected by it.
In this session Andrea shared their projects and revealed how they are working with design ethnographers, data scientists and other experts to develop new ideas in Government.
Here’s an interesting article by Stowe Boyd - the question for me is what the future workforce will actually look like. I don’t think we can simply extrapolate today’s workforce into tomorrow’s forms of work. It’s not just automation that is shifting the ground; new ways to engage, create value and contribute will likely transform what we consider work and how we do it. I think we place too much emphasis on generational cadres and not enough on the personal and behavioral impact of taking up new technologies - the boomers will have time, experience, and some independence (financial and otherwise) to learn many new things, and may be motivated to seek unprecedented new modes of being.
Millennials and the leadership gap
As the cadre of young people called millennials soon become the largest group in the workforce, more attention is being directed to their perspectives, goals, and desires. In a series of reports I led in 2014 for Gigaom Research — “The Modern Workforce” — I tried to examine some of the differences in their thinking about workplace and work technologies.
One central finding: the millennials had similar patterns of communication — using mobile devices, communications apps, collaboration tools, and face-to-face meetings, for example — but they were less ambivalent about them than other demographic groups. The younger you are, it seems, the more willing you are to invest time and energy into new tools and techniques that promise higher degrees of productivity and connection.
The Pew Research Center has estimated that 10,000 baby boomers will turn 65 every day from 2011 to 2030. As those workers leave the workforce their roles will increasingly be filled by millennials, so the next few years will start to shed light on what a millennial-majority workplace might be like. Other research has been published recently — by LinkedIn and Deloitte — that explores other facets of the millennial puzzle.
LinkedIn surveyed workers of all ages about workplace relations and found that millennials (defined for this study as those aged 18–24) are quite different in important ways from boomers (defined as those aged 55–65). Here are some examples:
Speaking about automation - here’s an interesting question, one related to the ‘Blockchain’ protocol. Given that this technology is still developing rapidly, this is well worth the read. Key issues include security as well.
How Much Longer Before Companies Start To Run Themselves?
Increasingly powerful AI is poised to threaten many well-established professions. A strong case can be made that no industry or job is safe — including the upper echelons of corporate governance. The era is fast approaching when companies will be able to operate with virtually no human intervention. Here's how it will work.
During the Industrial Revolution, workers rightfully worried about losing their jobs to machines. It's a trend that's still ongoing as newer, more powerful robots continue to displace manual laborers. But over the past few years, we've started to catch a glimpse of an entirely new kind of automation, one driven by advances in information technology and artificial intelligence.
For the first time in history, so-called "thought workers" — those employed in white collar positions — are in danger of being replaced. Here, the tired adage that new work will be created through the introduction of new automation technologies falls flat; narrow AI systems will eventually be capable of doing practically anything a human can do. The coming era of technological unemployment could prove to be a disruptive one, indeed.
Advances in automation are set to trickle up all the way to the top levels of corporate governance. Businesses will soon be able to operate themselves, free from human intervention. They're called Distributed Autonomous Corporations, or Decentralized Autonomous Corporations (DACs). Conceived a few years ago in the chatrooms of the Bitcoin community, these systems will be able to operate without any human involvement, guided by an inviolable set of business rules.
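As a purely illustrative toy - not how any real DAC is implemented, and with names and rules invented for the example - the sketch below captures the core idea of an “inviolable set of business rules”: every action is checked against rules that no human operator can override or change after the fact. Real DACs would encode such rules in smart contracts on a blockchain rather than in ordinary Python.

```python
# Toy illustration of rule-bound autonomy; all names and rules are invented.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the rules cannot be mutated once the DAC exists
class Rules:
    max_price: float      # never pay more than this per unit
    reserve_funds: float  # never let the treasury fall below this

@dataclass
class ToyDAC:
    rules: Rules
    treasury: float

    def buy(self, units: int, unit_price: float) -> bool:
        """Execute a purchase only if every rule is satisfied; otherwise refuse."""
        cost = units * unit_price
        if unit_price > self.rules.max_price:
            return False
        if self.treasury - cost < self.rules.reserve_funds:
            return False
        self.treasury -= cost
        return True

dac = ToyDAC(Rules(max_price=10.0, reserve_funds=100.0), treasury=500.0)
print(dac.buy(30, 9.0))   # True: within the price cap and keeps the reserve intact
print(dac.buy(50, 9.0))   # False: would drain the treasury below the reserve
```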
A rogue DAC, whether it's one designed to violate basic business laws or one that's just sloppily programmed, would be incredibly hard to contain and control — particularly if its utility function was modified to be evasive and its core programming capable of working beyond conventional rules. And owing to their distributed nature, these things would be very difficult for police and governments to shut down. According to Daniel Latimer, who Aeon Magazine describes as a "fairly radical" libertarian, "decentralized technologies will make governments entirely irrelevant, ineffective and unable to do anything."
Here is a great article that blends two of my key interests - Big Data and the power of metaphor, frames, and narrative to structure how we reason about issues. A metaphor structures reasoning through the correspondences it inevitably entails. Despite these constraints, it is impossible to think and communicate without frames, metaphors and narratives (explicit or implicit) - new knowledge becomes ‘graspable’ through the power of the cross-domain mapping of knowledge that metaphor enables. A must read. The power of metaphor to constrain or enable understanding and reason applies to all our domains of knowledge, behavior and politics.
Data is the New “___”
Sara M. Watson on the Industrial Metaphors of Big Data
Can you fathom the depths of big data? The word fathom names a measurement of the depth of the ocean, but it has also come to mean the ability to understand something. Fathom comes from faethm, meaning ‘the two arms outstretched.’ Its measurement of 6 feet, or 1.8 meters, is based on a standard human scale: the length of rope dropped overboard is handily measured across the span of a sailor’s armspread. The term makes the metaphorical jump to describe concepts that we are able to get our arms around; ideas are things to be grasped. As James Geary describes in his book on metaphor, “This is the primary purpose of metaphor: to carry over existing names or descriptions to things that are either so new that they haven’t been named or so abstract that they cannot be otherwise explained.”
DATA IS A NATURAL RESOURCE
- oil, gold rush, ecosystem, gathered, raw, trove
DATA IS AN INDUSTRIAL PRODUCT
- mining, refining, platform, breach, big data :: big pharma, big business
DATA IS A BYPRODUCT
- exhaust, data trail, breadcrumbs, smog, janitor, cleanser, smoke signals, signal and noise
DATA IS A MARKET
- economy, paying with data, currency, asset, vault, broker
DATA IS LIQUID
- ocean, deluge, tsunami, torrent, wave, firehose, lake
DATA AS TRENDY
- data is the new oil, data is the new currency, data is the new black, data is the new bacon, data scientist is the sexiest job of the 21st century, frontier, revolution, wild west
DATA IS A MIRROR portrays data as something to reflect on and as a technology for seeing ourselves as others see us. But, like mirrors, data can be distorted, and can drive dysmorphic thought.
DATA IS A PRACTICE references the self-tracking process that has been criticized as navel-gazing, but which can also be a means of introspection and a practice toward self-knowledge. The quantified self motto “self-knowledge through numbers” is a misnomer; self-knowledge comes through the attentive process of choosing what to track and self-observation.
DATA IS A BODY
- footprint, fingerprint, shadow, blood, DNA, reflection, identity, portrait, profile, doppelgänger
Speaking about the exponential growth of Big Data - here’s what Intel is saying about keeping up the exponential increase in computational capability.
Intel: Moore's Law will continue through 7nm chips
Eventually, the conventional ways of manufacturing microprocessors, graphics chips, and other silicon components will run out of steam. According to Intel researchers speaking at the ISSCC conference this week, however, we still have headroom for a few more years.
Intel plans to present several papers this week at the International Solid-State Circuits Conference in San Francisco, one of the key academic conferences for papers on chip design. Intel senior fellow Mark Bohr will also appear on a panel Monday night to discuss the challenges of moving from today's 14nm chips to the 10nm manufacturing node and beyond.
In a conference call with reporters, Bohr said that Intel believes that the current pace of semiconductor technology can continue beyond 10nm technology (expected in 2016) or so, and that 7nm manufacturing (expected in 2018) can be done without moving to expensive, esoteric manufacturing methods like ultraviolet lasers.
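As a rough back-of-the-envelope illustration of the cadence described above (14nm today, 10nm expected around 2016, 7nm around 2018), here is a toy calculation assuming a new node roughly every two years with a ~0.7x linear shrink per step. The dates and factors are illustrative assumptions for the arithmetic, not Intel's actual roadmap.

```python
# Toy projection of process-node scaling under an assumed two-year cadence.
def project_nodes(start_nm: float, start_year: int, steps: int,
                  shrink: float = 0.7, cadence_years: int = 2):
    node, year = start_nm, start_year
    roadmap = [(year, node)]
    for _ in range(steps):
        node *= shrink          # ~0.7x linear shrink roughly halves transistor area
        year += cadence_years
        roadmap.append((year, round(node, 1)))
    return roadmap

for year, nm in project_nodes(14.0, 2014, steps=3):
    print(f"{year}: ~{nm} nm")
# Prints roughly 14 -> 9.8 -> 6.9 -> 4.8 nm for 2014, 2016, 2018, 2020,
# matching the 14 -> 10 -> 7 nm cadence discussed in the article.
```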
Here’s a wonderful article about design and how we need to more deeply think about all the elements for the emerging Internet (swarm) of Things.
Emotional Design Fail: I'm Divorcing My Nest Thermostat
Summary: Don Norman’s 3 levels of emotional design (visceral, behavioral, and reflective) helped me understand how my pure love for the Nest thermostat morphed into abhorrence. I was a proud early adopter of the unique, cool, and pretty device. It helped me save energy, and communicated to me. But things went bad when it let me down emotionally.
I bought a Nest, a “cloud-based” programmable and semi-intelligent thermostat, a few years ago after Don Norman raved about it. And Don was spot on (as he had been with other items like the iRobot vacuum and the Starck Juicy Salif). It was an exciting courtship—from taking it out of the box, to the first time I adjusted the heat in my home when I was in another country—and I really did love it.
But things went sour. Over the last few months, love morphed to agitation. Agitation turned to dislike. And dislike swelled to antipathy. How has this inanimate device gone from astounding to abominable? In the spirit of self-psychoanalysis, I have deconstructed and determined that most of the reasons can be attributed to broken UX principles and ignored UX heuristics. And most important, this is a palpable case of emotional design gone awry.
In his book “Emotional Design: Why We Love (Or Hate) Everyday Things,” Don Norman describes the 3 building blocks of emotional design. These are:
- The visceral level “is what nature does”, it “dominates physical features, has the same rules all over the world.”
- The reflective level “is all about the message, culture, the meaning of the product or its use; evoking personal remembrance; self-image.”
- The behavioral level “is all about use...function comes first.”
I can attribute each of my loves or distastes for the Nest to these 3 levels…
Speaking about how to structure reasoning - this is an interesting, very short article by the author of “The Brain that Changes Itself” about his new book. It is written for a general audience and provides an accessible survey of many recent findings related to brain plasticity and how to develop means of compensating for brain injury or malfunction.
One key dimension of what Norman Doidge points to is new approaches to enabling more tailored training and development: assessing not just learning style, but brain structures that can be compensated for and/or enhanced with specialized training regimes.
This Doctor Made a Blind Man See and an Autistic Boy Speak
An 80-year-old woman with Parkinson’s pops a small neurostimulator on her tongue and two weeks later starts walking. Soon she’s balancing on a table, painting the ceiling.
Thanks to listening therapy, an autistic 3-year-old stops screaming and starts talking. A modified mix of Mozart, Gregorian chant and his mother’s voice stimulated dormant brain circuits.
The radical therapies that transform lives of little hope are vividly described by Norman Doidge, a psychiatrist and psychoanalyst, in his book The Brain’s Way of Healing: Remarkable Discoveries and Recoveries from the Frontiers of Neuroplasticity.
Here’s an interesting article about the emerging world of robotics - replacing large machines with swarms of smaller bots (imagine replacing some pesticides with swarmbots?) - something that even militaries should pay attention to.
Robots for agriculture will require new start-up companies to manufacture them
A British professor says large farm machinery engineering will be replaced by small start-up companies.
Agricultural robots are touted as the future for saving time, money and energy but also reducing damage on soils.
Professor Simon Blackmore, the head of engineering at Harper Adams University in the United Kingdom, says the large tractors have become too heavy. "Analysing the current systems, we're actually seeing as many problems from the big machines as they have solved in the past," said Professor Blackmore.
"I estimate that up to 90 per cent of the energy that goes into cultivation is there to repair the damage that the big tractors have caused in the first place."
He said agricultural robots were a major disruption to the production line of big manufacturers. "We've had a very linear development of agricultural machinery and they've got bigger all the time. With economies of scale, manufacturers have just made bigger and bigger vehicles," he said.
"I'm envisaging a complete new mechanisation system that doesn't really base itself on what we've done in the past, but on what we need now, away from industrial production, into flexible manufacturing, as we've seen in industry."
Speaking of agriculture and natural space - this is a very interesting article about different technological approaches to creating natural forests quickly. Applying this to other forms of agriculture means new types of education for agriculturalists - even urban farmers and foresters.
How to Grow a Forest Really, Really Fast
Shubhendu Sharma is working to reforest the world, one tiny patch at a time
A forest planted by humans, then left to nature’s own devices, typically takes at least 100 years to mature. What if we could make the process happen ten times faster? Eco-entrepreneur Shubhendu Sharma’s figured out a way of growing native, self-sustaining forests anywhere in the world, with the efficiency of industrial processes. He tells us how.
Back in 2008, I was an industrial engineer at Toyota in India, helping prepare assembly lines and dispatch systems for car manufacture. One day, a scientist named Akira Miyawaki came to the factory to plant a forest on Toyota’s campus. He gave a presentation on his methods, and I became so fascinated that I decided I wanted to learn how to plant a forest myself.
Miyawaki is quite famous, and very old; he’s now 87. He has planted around 40 million trees all over the world, and in 2006, he won the Blue Planet Prize, the equivalent to the Nobel Prize in the environmental field. His method’s based on what’s called “potential natural vegetation”— a theory that if a piece of land is free from human intervention, a forest will naturally self-seed and take over that land within a period of around 600 to 1,000 years, with the species that would be native and robust, and that would require no maintenance. Miyawaki’s methodology amplifies that growth process to establish a mature, native forest in ten years — ten times the normal rate of forests planted by humans.
Intrigued, I volunteered with Miyawaki and studied his methodologies, and then planted a forest of 300 trees of 42 species in a 93-square-meter plot in my back garden. It was such a success that I decided to quit the car industry to start Afforestt, a for-profit company devoted to planting native forests for all kinds of clients, from farmers to corporations to city governments.
Here’s something coming to a city near us soon.
A Vacant Lot In Wyoming Will Become One Of The World's First Vertical Farms
A unique conveyer belt design allows the three-story greenhouse to be efficient and sustainable, providing jobs and fresh produce to the Jackson community.
Jackson, Wyoming, is an unlikely place for urban farming: At an altitude over a mile high, with snow that can last until May, the growing season is sometimes only a couple of months long. It's also an expensive place to plant a garden, since an average vacant lot can cost well over $1 million.
But the town is about to become home to one of the only vertical farms in the world. On a thin slice of vacant land next to a parking lot, a startup called Vertical Harvest recently broke ground on a new three-story stack of greenhouses that will be filled with crops like microgreens and tomatoes.
"We're replacing food that was being grown in Mexico or California and shipped in," explains Penny McBride, one of the co-founders. "We feel like the community's really ready for a project like this. Everybody's so much more aware of the need to reduce transportation, and people like to know their farmer and where food's coming from."
The small plot of land is owned by the town, and the building that houses the farm will be owned by the town as well, as part of a partnership. The founders spent five years working with the city to fully vet the idea—from how well the business model can support itself to how efficient the new building will be.
In a year, the greenhouse should be able to crank out over 37,000 pounds of greens, 4,400 pounds of herbs, and 44,000 pounds of tomatoes. The yields are high compared to traditional farming, because of the efficiencies of the farm's hydroponic system. But it still will be only a fraction of the produce needed for the town, which has fewer than 10,000 residents but many more tourists.
Here is one answer to potential water shortages.
A Bamboo Tower That Produces Water From Air
The WarkaWater tower is an unlikely structure to find jutting from the Ethiopian landscape. At 30 feet tall and 13 feet wide, it’s not half as big as its namesake tree (which can loom 75 feet tall), but it’s striking nonetheless. The spindly tower, of latticed bamboo lined with orange polyester mesh, isn’t art—though it does kind of look like it. Rather, the structure is designed to wring water out of the air, providing a sustainable source of H2O for developing countries.
Created by Arturo Vittori and his team at Architecture and Vision, the towers harvest water from rain, fog and dew. This isn’t a new idea—people have been doing this for as long as they’ve needed water, often with air wells. Often built as high-rising stone structures, air wells gather moisture from the air and funnel it into a basin for collection. The WarkaWater functions in much the same way, using mesh netting to capture moisture and direct it into a hygienic holding tank accessed via a spout.
Based on tests performed in its Italian lab, the company claims the latest iteration can harvest 13 to 26.4 gallons of water daily. That’s less than most people flush away each day, but a significant quantity in a country where some 60 million people lack sufficient potable water.
Speaking of innovation - here’s how we are bringing innovative training to people in ancient occupations.
Innovation Is Sweeping Through U.S. Medical Schools
Preparing doctors—and in greater numbers—for new technologies and methods
Critics have long faulted U.S. medical education for being hidebound, imperious and out of touch with modern health-care needs. The core structure of medical school—two years of basic science followed by two years of clinical work—has been in place since 1910.
Now a wave of innovation is sweeping through medical schools, much of it aimed at producing young doctors who are better prepared to meet the demands of the nation’s changing health-care system.
At the new Hofstra North Shore-LIJ School of Medicine in Hempstead, N.Y., students spend their first eight weeks not in lecture classes but becoming certified emergency medical technicians, learning split-second lifesaving skills on 911 calls.
Doctors today are well schooled in the science of medicine, says Susan Skochelak, the American Medical Association’s vice president for medical education. “What’s been missing is the science of health-care delivery. How do you manage chronic disease? How do you focus on prevention and wellness? How do you work in a team?”
To that end, in April, a new MCAT—the Medical College Admission Test—will be administered, the test’s first major revision since 1991. The new version is 2 hours longer (6 hours and 30 minutes) and tests knowledge of behavioral and social sciences as well as biology, physics and chemistry. One sample question has applicants read a passage, then asks which of four statements “is most consistent with the sociological paradigm of symbolic interactionism?”
This is a very important breakthrough - a MUST READ for anyone interested in the future of domesticated DNA. It includes a 5-minute video that explains the breakthrough - CRISPR - very well; it’s here: https://www.youtube.com/watch?v=2pp17E4E-O8#t=239
The biggest biotech discovery of the century is about to change medicine forever
Jennifer Doudna, a biochemist at the University of California, Berkeley, and her collaborator, Emmanuelle Charpentier of the Helmholtz Centre for Infection Research in Germany, each received $3 million for their invention of a potentially revolutionary tool for editing DNA known as CRISPR.
It was only in 2012 that Doudna, Charpentier and their colleagues offered the first demonstration of CRISPR’s potential. They crafted molecules that could enter a microbe and precisely snip its DNA at a location of the researchers’ choosing. In January 2013, the scientists went one step further: They cut out a particular piece of DNA in human cells and replaced it with another one.
A scientific stampede commenced, and in just the past two years, researchers have performed hundreds of experiments on CRISPR. Their results hint that the technique may fundamentally change both medicine and agriculture.
Some scientists have repaired defective DNA in mice, for example, curing them of genetic disorders. Plant scientists have used CRISPR to edit genes in crops, raising hopes that they can engineer a better food supply. Some researchers are trying to rewrite the genomes of elephants, with the ultimate goal of re-creating a woolly mammoth.
Writing last year in the journal Reproductive Biology and Endocrinology, Motoko Araki and Tetsuya Ishii of Hokkaido University in Japan predicted that doctors will be able to use CRISPR to alter the genes of human embryos "in the immediate future."
Speaking of biotechnology - here’s something to think about for the next decade (in combination with other breakthroughs).
Brain Organoids
A new method for growing human brain cells could unlock the mysteries of dementia, mental illness, and other neurological disorders.
Availability: now
As Madeline Lancaster lifts a clear plastic dish into the light, roughly a dozen clumps of tissue the size of small baroque pearls bob in a peach-colored liquid. These are cerebral organoids, which possess certain features of a human brain in the first trimester of development—including lobes of cortex. The bundles of human tissue are not exactly “brains growing in a dish,” as they’re sometimes called. But they do open a new window into how neurons grow and function, and they could change our understanding of everything from basic brain activities to the causes of schizophrenia and autism.
Before it grows in one of Lancaster’s dishes, a brain organoid begins as a single skin cell taken from an adult. With the right biochemical prodding, that cell can be turned into an induced pluripotent stem cell (the kind that can mature into one of several types of cells) and then into a neuron. This makes it possible to do things that were impossible before. Now scientists can directly see how networks of living human brain cells develop and function, and how they’re affected by various drug compounds or genetic modifications. And because these mini-brains can be grown from a specific person’s cells, organoids could serve as unprecedentedly accurate models for a wide range of diseases. What goes wrong, for example, in neurons derived directly from someone with Alzheimer’s disease?
The prospect of finding answers to such questions is leading pharmaceutical companies and academic researchers to seek collaborations with Lancaster and Jürgen Knoblich, whose lab at the Institute of Molecular Biotechnology (IMBA) in Vienna, Austria, is where Lancaster developed the organoids as a postdoc. The first of these collaborations was an investigation of microcephaly, a disorder characterized by small brain size, with Andrew Jackson of the University of Edinburgh. Using cells derived from a patient with microcephaly, the team cultured organoids that shared characteristics with the patient’s brain. Then the researchers replaced a defective protein associated with the disorder and were able to culture organoids that appeared partially cured.
This is just the beginning, says Lancaster. Researchers such as Rudolph Jaenisch at MIT and Guo-li Ming at Johns Hopkins are beginning to use brain organoids to investigate autism, schizophrenia, and epilepsy.
Coming to ground forces in the near future?
SQUAD X CORE TECHNOLOGIES SEEKS TO BRING TECHNOLOGICAL ADVANCES TO THE INFANTRY SQUAD
Warfighters in aircraft, on ships and in ground vehicles have benefited tremendously from technological advances in recent decades, with advanced capabilities ranging from real-time situational awareness to precision armaments. But many of these benefits depend on equipment with substantial size, weight and power requirements, and so have remained unavailable to dismounted infantry squads who must carry all their equipment themselves.
DARPA’s new Squad X Core Technologies (SXCT) program aims to address this challenge and ensure that dismounted infantry squads maintain uncontested tactical superiority over potential adversaries without being overburdened by cumbersome hardware. The goal is to speed the development of new, lightweight, integrated systems that provide infantry squads unprecedented awareness, adaptability and flexibility in complex environments, and enable dismounted Soldiers and Marines to more intuitively understand and control their complex mission environments.
“SXCT aims to help dismounted infantry squads have deep awareness of what’s around them, detect threats from farther away and, when necessary, engage adversaries more quickly and precisely than ever before,” said Maj. Christopher Orlowski, DARPA program manager. “We are working towards advanced capabilities that would make dismounted infantry squads more adaptable, safe and effective.”
Here’s something that we should be paying attention to - a new form of civil force, civil participation, civil security. Not just us - our swarms of IoT devices could provide emergency help anywhere, anytime.
Drone hobbyists can now volunteer to assist humanitarian crises
UAViators is a humanitarian UAV network, which signs up experienced amateur drone operators who would be willing to provide disaster relief.
The latest technology is frequently only accessible to large companies and wealthy enthusiasts — those who can afford the high cost of groundbreaking products. Often devices, such as 3D printers and drones, could have a huge positive impact on underprivileged people and areas, but the cost proves to be prohibitive. We have already seen numerous enterprises designed to help bridge this gap — such as E-Nable, which connects 3D printing enthusiasts with those in need of prosthetic hands. The latest of these “bridging” initiatives is UAViators — a humanitarian UAV network — which signs up experienced drone operators who would be willing to provide disaster relief.
UAViators is a subsidiary of the Qatar Computing Research Institute, set up by Director of Social Innovation Patrick Meier, in order to capitalise on the potential of UAVs to assist with humanitarian disasters. It is a volunteer community, united by their interest in and expertise with drones, who are ready to help in times of crisis, and are keen to receive training and advice from aid groups in the meantime.
Here’s a 5-minute video from a former astronaut about building an alien intelligence - an AI.
Former Astronaut Dan Barry: How to Meet an Alien (AI)
“We have enough human brains, I have no interest in replicating the human brain.”
-Daniel Barry
What is the appeal of building an artificial brain? For some individuals, an artificial brain represents a chance to fully understand one’s own nature. For others, the artificial mind represents self-improvement, an opportunity to make their human brain faster, smarter, less forgetful, geared towards long-term thinking. Longevity-minded individuals find the allure of artificial intelligence to be linked to the promise of a brain that can transcend the fragile mortality found in the biological models. Optimists pitch artificial intelligence to the world as our ultimate servant, the benevolent, rational being that only wants to help. Anxious visionaries warn that it could be an enslaver, a humanity killer, an existential risk.
Engineer and Former Astronaut Dan Barry has a perspective on artificial minds that does not have the ring of a rapture or armageddon, but is in its own way, otherworldly.
Speaking of intelligence - here’s a short article that is great in and of itself but also points to the emerging importance of environment in shaping the brain.
Newborn neurons in the adult brain may help us adapt to the environment
The discovery that the human brain continues to produce new neurons in adulthood challenged a major dogma in the field of neuroscience, but the role of these neurons in behavior and cognition is still not clear. In a review article published by Cell Press February 21st in Trends in Cognitive Sciences, Maya Opendak and Elizabeth Gould of Princeton University synthesize the vast literature on this topic, reviewing environmental factors that influence the birth of new neurons in the adult hippocampus, a region of the brain that plays an important role in memory and learning.
The authors discuss how the birth of such neurons may help animals and humans adapt to their current environment and circumstances in a complex and changing world. They advocate for testing these ideas using naturalistic designs, such as allowing laboratory rodents to live in more natural social burrow settings and observing how circumstances such as social status influence the rate at which new neurons are born.
In recent years, it has become increasingly clear that environmental influences have a profound effect on the adult brain in a wide range of mammalian species. Stressful experiences, such as restraint, social defeat, exposure to predator odors, inescapable foot shock, and sleep deprivation, have been shown to decrease the number of new neurons in the hippocampus. By contrast, more rewarding experiences, such as physical exercise and mating, tend to increase the production of new neurons in the hippocampus.
Because many studies that investigate adult neurogenesis use controlled laboratory conditions, the relevance of the findings to real-world circumstances remains unclear. The use of a visible burrow system—a structure consisting of tubes, chambers, and an open field—has allowed researchers to recreate the conditions that allow for the production of dominance hierarchies that rats naturally form in the wild, replicating the stressors, rewards, and cognitive processes that accompany this social lifestyle.
For Fun
Here’s a great development - watching AI warfare.
Why People Are Making The AI Fight Itself In Civilization
A strange thing happened in the Civilization community r/civ on January 10, 2015. Inspired by similar, smaller-scale offerings by a Twitch.tv livestream and fellow redditor DarkLava (from whom he explicitly sought permission), user Jasper K., aka thenyanmaster, shared the first part of an experiment he was conducting wherein he put 42 computer-controlled civilisations in their real-life locations on a giant model of the Earth and left them to duke it out in a battle to the death, Highlander style (except instead of heads they need capital cities).
Since then, the practice has exploded in popularity. Reddit’s Civilization community has AI-only fever, but what exactly is so compelling about watching the computer play a very slow-paced turn-based strategy game with itself?
As I write this now, at 16 parts, 363 turns, and 4,130 years deep into the epic, still-unravelling storyline, Poland stands among the contenders for victory, with four (soon to be five) capital cities conquered and a red army that dominates Eastern Europe.
Thenyanmaster’s game of “what if?” has won such a following – particularly after a nod on the 4.8-million-subscribers-strong Bestof subreddit – that copycats have sprung up all over the place. There’s an African version, a Civ IV Rhye’s and Fall of Civilization mod version, one with just the British Isles, another with 20 civs on a map meant for just two, and at least 10 more – not counting the AI-only campaigns some people run privately to satiate their own curiosity or to compare with their favourite ongoing narrative. Now there’s even an official community game being organised and run (including a live Twitch stream) by subreddit mod TPangolin, with 42 civs and 42 mods enabled, which has dominated the community discussions since it was announced — there are separate threads for in-jokes, pre-game analyses, and trash talking, amongst other things.
For anyone who ever wondered what happened to the children of the Addams Family, here’s one answer. :) The YouTube channel is now well into season 2.
Adult Wednesday Addams: The Apartment Hunt [S1, Ep 1]
Wednesday has finally moved out of the Addams Manor. // New episodes every Wednesday.