Showing posts with label MIT News.

Thursday, September 17, 2020

Engineers produce a fisheye lens that’s completely flat

To capture panoramic views in a single shot, photographers typically use fisheye lenses — ultra-wide-angle lenses made from multiple pieces of curved glass, which distort incoming light to produce wide, bubble-like images. Their spherical, multipiece design makes fisheye lenses inherently bulky and often costly to produce.

Now engineers at MIT and the University of Massachusetts at Lowell have designed a wide-angle lens that is completely flat. It is the first flat fisheye lens to produce crisp, 180-degree panoramic images. The design is a type of “metalens,” a wafer-thin material patterned with microscopic features that work together to manipulate light in a specific way.

In this case, the new fisheye lens consists of a single flat, millimeter-thin piece of glass covered on one side with tiny structures that precisely scatter incoming light to produce panoramic images, just as a conventional curved, multielement fisheye lens assembly would. The lens works in the infrared part of the spectrum, but the researchers say it could be modified to capture images using visible light as well.

The new design could potentially be adapted for a range of applications, with thin, ultra-wide-angle lenses built directly into smartphones and laptops, rather than physically attached as bulky add-ons. The low-profile lenses might also be integrated into medical imaging devices such as endoscopes, as well as in virtual reality glasses, wearable electronics, and other computer vision devices.

“This design comes as somewhat of a surprise, because some have thought it would be impossible to make a metalens with an ultra-wide field of view,” says Juejun Hu, associate professor in MIT’s Department of Materials Science and Engineering. “The fact that this can actually realize fisheye images is completely outside expectation. This isn’t just light-bending — it’s mind-bending.”

Hu and his colleagues have published their results today in the journal Nano Letters. Hu’s MIT coauthors are Mikhail Shalaginov, Fan Yang, Peter Su, Dominika Lyzwa, Anuradha Agarwal, and Tian Gu, along with Sensong An and Hualiang Zhang of UMass Lowell.

Design on the back side

Metalenses, while still largely at an experimental stage, have the potential to significantly reshape the field of optics. Previously, scientists have designed metalenses that produce high-resolution and relatively wide-angle images of up to 60 degrees. To expand the field of view further would traditionally require additional optical components to correct for aberrations, or blurriness — a workaround that would add bulk to a metalens design.

Hu and his colleagues instead came up with a simple design that does not require additional components and keeps a minimum element count. Their new metalens is a single transparent piece made from calcium fluoride with a thin film of lead telluride deposited on one side. The team then used lithographic techniques to carve a pattern of optical structures into the film.

Each structure, or “meta-atom,” as the team refers to them, is shaped into one of several nanoscale geometries, such as a rectangular or a bone-shaped configuration, that refracts light in a specific way. For instance, light may take longer to scatter, or propagate, off one shape than off another — a phenomenon known as phase delay.

In conventional fisheye lenses, the curvature of the glass naturally creates a distribution of phase delays that ultimately produces a panoramic image. The team determined the corresponding pattern of meta-atoms and carved this pattern into the back side of the flat glass.
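For a flat element, the simplest version of that target phase distribution is the textbook hyperbolic focusing profile, which delays light more at the center than at the edges so all rays arrive at the focus in phase. A minimal sketch of that idea — the wavelength and focal length below are illustrative values, not the paper's, and the team's actual meta-atom pattern additionally corrects aberrations across the full 180-degree field:

```python
import math

def focusing_phase(r, wavelength, focal_length):
    """Phase delay (radians) a flat lens must impose at radius r so that
    all rays arrive at the focal point in phase (hyperbolic profile)."""
    k = 2 * math.pi / wavelength  # free-space wavenumber
    return -k * (math.sqrt(r ** 2 + focal_length ** 2) - focal_length)

# Illustrative mid-infrared numbers: 5.2-micron light, 2 mm focal length
wavelength, f = 5.2e-6, 2e-3
profile = [focusing_phase(i * 1e-4, wavelength, f) for i in range(11)]  # r = 0 to 1 mm
```

The profile is zero on axis and grows increasingly negative toward the rim, which is exactly the delay distribution a curved glass surface produces for free.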

“We’ve designed the back side structures in such a way that each part can produce a perfect focus,” Hu says.

On the front side, the team placed an optical aperture, or opening for light.

“When light comes in through this aperture, it will refract at the first surface of the glass, and then will get angularly dispersed,” Shalaginov explains. “The light will then hit different parts of the backside, from different and yet continuous angles. As long as you design the back side properly, you can be sure to achieve high-quality imaging across the entire panoramic view.”

Across the panorama

In one demonstration, the new lens is tuned to operate in the mid-infrared region of the spectrum. The team used an imaging setup equipped with the metalens to snap pictures of a striped target. They then compared the quality of pictures taken at various angles across the scene, and found that the new lens produced images of the stripes that were crisp and clear, even at the edges of the camera’s view, spanning nearly 180 degrees.

“It shows we can achieve perfect imaging performance across almost the whole 180-degree view, using our methods,” Gu says.

In another study, the team designed the metalens to operate at a near-infrared wavelength using amorphous silicon nanoposts as the meta-atoms. They plugged the metalens into a simulation used to test imaging instruments. Next, they fed the simulation a scene of Paris, composed of black and white images stitched together to make a panoramic view. They then ran the simulation to see what kind of image the new lens would produce.

“The key question was, does the lens cover the entire field of view? And we see that it captures everything across the panorama,” Gu says. “You can see buildings and people, and the resolution is very good, regardless of whether you’re looking at the center or the edges.”

The team says the new lens can be adapted to other wavelengths of light. To make a similar flat fisheye lens for visible light, for instance, Hu says the optical features may have to be made smaller than they are now, to better refract that particular range of wavelengths. The lens material would also have to change. But the general architecture that the team has designed would remain the same.

The researchers are exploring applications for their new lens, not just as compact fisheye cameras, but also as panoramic projectors, as well as depth sensors built directly into smartphones, laptops, and wearable devices.

“Currently, all 3D sensors have a limited field of view, which is why when you put your face away from your smartphone, it won’t recognize you,” Gu says. “What we have here is a new 3D sensor that enables panoramic depth profiling, which could be useful for consumer electronic devices.”

This research was funded in part by DARPA under the EXTREME Program.



from MIT News https://ift.tt/3iGWsjx
via Gabe's Musing's

Teaching mechanical engineering in a pandemic

Educators across the globe spent much of the summer preparing for an academic year unlike any in history. Well before MIT announced its plans for fall semester in early July, faculty and teaching staff across the Institute had spent weeks revamping their fall classes for a number of scenarios. With the majority of classes being taught remotely and extensive safety protocols in place for classes with in-person components, teaching staff had to get creative.

“Our teaching team is stubborn and we were not going to give up,” says Nevan Hanumara SM ’06, PhD ’12, research scientist and instructor in class 2.75 (Medical Device Design). “We decided that we had to find a way to deliver a good educational experience no matter where the students are — they could be at home with their families, sharing a dining table with housemates, or in the dorms.” 

Like Hanumara and the 2.75 teaching team, faculty and staff across MIT have had to completely revise their classes to either be fully remote or ensure that any in-person component is done in a way that keeps students and the wider community safe. The hands-on and project-based nature of many mechanical engineering classes posed a unique challenge for faculty and staff in MIT’s Department of Mechanical Engineering (MechE).

Maximizing lab time

Students in class 2.008 (Design and Manufacturing II) have the option of attending some in-person components of the course or taking it fully remotely. To keep everyone safe, the 2.008 teaching team plans to maximize lab time.

All class-based portions of the course, including lectures, will be remote, as has been the case for the past several years in 2.008. During the lab-based portions, teams of roughly five students will be tasked with designing and building 50 yo-yos using injection molding with the assistance of staff in MIT’s Laboratory for Manufacturing and Productivity (LMP).

Joseph Wight, manufacturing lab manager, in collaboration with the 2.008 staff and other instructors in MechE, has developed a system to ensure students are fully familiar with lab equipment before they step into LMP by broadcasting instructions remotely before in-person lab time.

“I have cameras pointing at the spindles, the machines, and the screens that control the machines we use in the course,” says Wight. “The objective is to get students as comfortable as possible before they enter into the shop so that when they show up, they know what machine they're going to use and how to use it.”

Students who are on campus and participating in the in-person components are asked to relay details of their lab experiences with their remote team members.

Outside the lab, Wight and the rest of the teaching team are having students use more computer-aided design and simulation software, both to engage remote students and to make the entire design and manufacturing process more interactive.

“We’re going to do our best to give students what they need to finish this course and have a great experience, but the caveat is that it’s going to be different,” adds Wight. “In a way, we are preparing students for how to work remotely, which is going to be a part of whatever career they choose moving forward.”

Mimicking the hybrid model of some remote and in-person employees in industry is something the teaching team of 2.75 (Medical Device Design) is also exploring this semester.

Sending “mechanical gizmo” kits

Like Wight and the 2.008 staff, the teaching team of 2.75 designed a course that could either be taken fully remotely or with some in-person components.

“We realized we have an opportunity for a unique blended learning experience with some team members remote and some in-person, just as it would be in industry nowadays,” says Hanumara.

Wherever students are this semester, they were sent a kit of materials — or “mechanical gizmos” — assembled by Alexander Slocum '82 SM '83 PhD '85, the Walter M. May and A. Hazel May Professor, and his wife Debra Slocum SM '89, as well as a kit of basic electronics, designed by veteran instructor Gim Hom '71 SM '72 EE '73 SM '73. Using materials from the kits, students will assemble small wooden precision fixtures and their own heart rate monitors. In most instances, students won’t know what they can use the kit of materials to build until lecture has started, live streaming from Slocum’s home workshop.

As in a typical semester, students in 2.75 will also pick from a list of projects to design and build a medical device prototype. This year, students can choose from projects led by Giovanni Traverso, the Karl Van Tassel (1925) Career Development Professor; Ellen Roche, the W.M. Keck Foundation Career Development Professor; and others proposed by clinician-collaborators that the team has assembled.  

For Hanumara, this semester offers an opportunity to garner insights into how to make education more accessible for communities that are geographically isolated or for individuals with inflexible schedules.

“Fall 2020 is going to be different, but it is an incredible experiment in new modalities of teaching,” he adds. “What we learn from this semester at MIT will carry forward.”

Prioritizing self-directed projects

Technical instructor Steve Banzaert and the team of faculty and instructors in class 2.678 (Electronics for Mechanical Systems) took lessons from the course’s spring 2020 unit to shape plans for fall semester. The class is being taught fully remotely.

“Our big takeaway from the spring was that in order for students to get as much out of the subject as we wanted, we had to transform the class to focus on more self-directed project work,” says Banzaert.

As a result, students will be asked to complete open-ended projects on longer time frames than usual. Using a kit of materials sent to them over the summer, students will build devices and circuit boards that help them learn about physical phenomena associated with electronics.

To support students as they embark on their self-directed projects, the teaching staff has set up a “call center” to answer students’ questions at any time of the day throughout the week.

“Over the years, we have tried to build a welcoming and friendly community in this class. That’s the thing I’m hoping to translate the most into this online space,” adds Banzaert.

Building community in synchronous small group meetings

Community is at the center of another mechanical engineering class this semester — 2.001 (Mechanics and Materials I).

For many sophomores, 2.001 serves as their introduction to mechanical engineering at MIT. New bonds and friendships form during lectures, labs, and recitations.

“This class is where the MechE community forms. Students start getting to know each other and working together,” explains Simona Socrate, senior lecturer. “With students isolated in their own homes, bringing this sense of community is one of our biggest challenges.”

To help foster this community while the course is being taught fully remotely, Socrate and her fellow instructors are offering a number of synchronous small group meetings. Using whiteboard applications, teaching staff will interact with small groups of students to teach them core concepts.

These synchronous meetings are supported by a kit of fun, everyday materials and engineering components that were selected to illustrate key course concepts and shipped to students this summer. The kit also includes custom parts, manufactured by Professor Ely Sachs, to allow the students to conduct the course’s hands-on “Discovery Labs” in their own homes. While the instructor teaches a structural mechanics concept on Zoom, students can open their kit and refer to the object in question. Materials include small pool noodles, exercise bands, Twizzlers, locking pliers, finger traps, and Silly Putty.

“We try to make the synchronous time a combination of learning and personal interactions. The kits of materials help us all interact together online and make class time more engaging,” says Socrate.

Instructors like Socrate may not be able to predict how the Covid-19 pandemic will alter the rest of the academic year, but they will continue to innovate and develop new methods of teaching to provide students with the best possible educational experience in any circumstance.



from MIT News https://ift.tt/2EaxdH5
via Gabe's Musing's

Thursday, September 10, 2020

Monitoring sleep positions for a healthy rest

MIT researchers have developed a wireless, private way to monitor a person’s sleep postures — whether snoozing on their back, stomach, or sides — using reflected radio signals from a small device mounted on a bedroom wall.

The device, called BodyCompass, is the first home-ready, radio-frequency-based system to provide accurate sleep data without cameras or sensors attached to the body, according to Shichao Yue, who will introduce the system in a presentation at the UbiComp 2020 conference on Sept. 15. The PhD student has used wireless sensing to study sleep stages and insomnia for several years.

“We thought sleep posture could be another impactful application of our system” for medical monitoring, says Yue, who worked on the project under the supervision of Professor Dina Katabi in the MIT Computer Science and Artificial Intelligence Laboratory. Studies show that stomach sleeping increases the risk of sudden death in people with epilepsy, he notes, and sleep posture could also be used to measure the progression of Parkinson’s disease as the condition robs a person of the ability to turn over in bed.

In the future, people might also use BodyCompass to keep track of their own sleep habits or to monitor infant sleeping, Yue says: “It can be either a medical device or a consumer product, depending on needs.”

Other authors on the conference paper, published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, include graduate students Yuzhe Yang and Hao Wang, and Katabi Lab affiliate Hariharan Rahul. Katabi is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.

Restful reflections

BodyCompass works by analyzing the reflection of radio signals as they bounce off objects in a room, including the human body. Similar to a Wi-Fi router attached to the bedroom wall, the device sends and collects these signals as they return through multiple paths. The researchers then map the paths of these signals, working backward from the reflections to determine the body’s posture.

For this to work, however, the scientists needed a way to figure out which of the signals were bouncing off the sleeper’s body, and not bouncing off the mattress or a nightstand or an overhead fan. Yue and his colleagues realized that their past work in deciphering breathing patterns from radio signals could solve the problem.

Signals that bounce off a person’s chest and belly are uniquely modulated by breathing, they concluded. Once that breathing signal was identified as a way to “tag” reflections coming from the body, the researchers could analyze those reflections compared to the position of the device to determine how the person was lying in bed. (If a person was lying on her back, for instance, strong radio waves bouncing off her chest would be directed at the ceiling and then to the device on the wall.) “Identifying breathing as coding helped us to separate signals from the body from environmental reflections, allowing us to track where informative reflections are,” Yue says.
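The idea of using breathing as a tag can be sketched with a simple spectral test: a reflection path that carries most of its power in the roughly 0.1 to 0.5 Hz breathing band is likely coming off the body. A toy illustration with synthetic one-dimensional signals — the actual system works on raw RF multipath data and is far more involved:

```python
import numpy as np

def breathing_band_power(signal, fs, lo=0.1, hi=0.5):
    """Fraction of a signal's power that lies in the human breathing
    band (roughly 0.1 to 0.5 Hz)."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = spectrum[(freqs >= lo) & (freqs <= hi)].sum()
    return band / spectrum[freqs > 0].sum()

# Synthetic demo: one path modulated by 0.25 Hz breathing, one just noise
fs = 10.0                       # samples per second
t = np.arange(0, 60, 1.0 / fs)  # 60 seconds of data
rng = np.random.default_rng(0)
body = np.sin(2 * np.pi * 0.25 * t) + 0.1 * rng.standard_normal(t.size)
wall = 0.1 * rng.standard_normal(t.size)

paths = {"body": body, "wall": wall}
tagged = max(paths, key=lambda name: breathing_band_power(paths[name], fs))
```

Here `tagged` picks out the breathing-modulated path, leaving the static environmental reflection behind.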

Reflections from the body are then analyzed by a customized neural network to infer how the body is angled in sleep. Because the neural network defines sleep postures according to angles, the device can distinguish a sleeper lying on the right side from one who has merely tilted slightly to the right. This kind of fine-grained analysis would be especially important for epilepsy patients, for whom sleeping in a prone position is correlated with sudden unexpected death, Yue says.
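Working in angles rather than fixed labels is what enables that fine-grained distinction: the system can report, say, 50 degrees of roll rather than just "right side." A hypothetical mapping from a roll angle to a coarse posture label — the bin edges here are invented for illustration, not taken from the paper:

```python
def posture_from_angle(angle_deg):
    """Map a body-roll angle (0 = lying on the back, positive = rolled
    toward the right side) to a coarse posture label.
    Bin edges are hypothetical, chosen only for illustration."""
    a = ((angle_deg + 180) % 360) - 180  # normalize to (-180, 180]
    if -45 <= a <= 45:
        return "back"
    if 45 < a <= 135:
        return "right side"
    if -135 <= a < -45:
        return "left side"
    return "stomach"
```

An angle-valued output also lets downstream logic apply finer thresholds, for instance flagging any prone-leaning angle for an epilepsy patient rather than waiting for a full category change.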

BodyCompass has some advantages over other ways of monitoring sleep posture, such as installing cameras in a person’s bedroom or attaching sensors directly to the person or their bed. Sensors can be uncomfortable to sleep with, and cameras reduce a person’s privacy, Yue notes. “Since we will only record essential information for detecting sleep posture, such as a person’s breathing signal during sleep,” he says, “it is nearly impossible for someone to infer other activities of the user from this data.”

An accurate compass

The research team tested BodyCompass’ accuracy over 200 hours of sleep data from 26 healthy people sleeping in their own bedrooms. At the start of the study, the subjects wore two accelerometers (sensors that detect movement) taped to their chest and stomach, to train the device’s neural network with “ground truth” data on their sleeping postures.

BodyCompass was most accurate — predicting the correct body posture 94 percent of the time — when the device was trained on a week’s worth of data. One night’s worth of training data yielded accurate results 87 percent of the time. BodyCompass could achieve 84 percent accuracy with just 16 minutes’ worth of data, collected when sleepers were asked to hold a few usual sleeping postures in front of the wireless sensor.

Along with epilepsy and Parkinson’s disease, BodyCompass could prove useful in treating patients vulnerable to bedsores and sleep apnea, since both conditions can be alleviated by changes in sleeping posture. Yue has his own interest as well: He suffers from migraines that seem to be affected by how he sleeps. “I sleep on my right side to avoid headache the next day,” he says, “but I’m not sure if there really is any correlation between sleep posture and migraines. Maybe this can help me find out if there is any relationship.”

For now, BodyCompass is a monitoring tool, but it may be paired someday with an alert that can prod sleepers to change their posture. “Researchers are working on mattresses that can slowly turn a patient to avoid dangerous sleep positions,” Yue says. “Future work may combine our sleep posture detector with such mattresses to move an epilepsy patient to a safer position if needed.”



from MIT News https://ift.tt/32iJv9G
via Gabe's Musing's

Wednesday, September 9, 2020

Digitizing supply chains to lift farmers out of poverty

Millions of cocoa farmers live in poverty across western Africa. Over the years, these farmers have been forced to contend with geopolitical instability, predatory loan practices, and a general lack of information that hampers their ability to maximize yields and sell crops at fair prices. Other problems, such as deforestation and child labor, also plague the cocoa industry.

For the last five years, however, cocoa supply chains in villages across the Ivory Coast, Cameroon, and Ghana have been transformed. A suite of digital solutions has improved profitability for more than 200,000 farmers, encouraged sustainable and ethical production practices, and made cocoa supply chains more traceable and efficient.

The progress was enabled by SourceTrace, a company that helps improve agricultural supply chains around the world. SourceTrace offers tools to help manage and sell crops, buy and track goods, and trace products back to the farms where they were made.

Through partnerships with farmer cooperatives, financial institutions, governments, and consumer brands, SourceTrace has impacted more than 1.2 million farmers across 28 countries.

CEO Venkat Maroju MBA ’07 believes the company’s success comes from the idea that the only way to improve one part of agricultural supply chains is to improve every part.

“The whole idea of our platform is to make the agricultural value chain sustainable, predictable, profitable, equitable, and traceable,” Maroju says.

Illuminating supply chains

Maroju grew up in a rural region of Telangana in southern India in what he describes as “very humble beginnings.” When it rained, his school was cancelled. He also studied in the local language until 12th grade, adding to the difficulty of his college entrance exam.

Through an affirmative action program, Maroju earned admittance to an engineering university, and his English improved over the next four years. He went on to get his master’s degree at the Indian Institute of Science and later came to the United States to pursue his PhD at Old Dominion University in Virginia.

Following his PhD, Maroju stayed in the U.S., but he became active in the politics of his home region of Telangana, including in that area’s push for statehood, which was achieved in 2014.

During that time, Maroju learned a lot about the hardships associated with small plot farming, the main profession for more than 60 percent of the people in the Telangana region. Over the last two decades, such farmers have had to contend with dramatic changes to agricultural policies following the country’s economic liberalization, as well as predatory lending practices that have led to a large number of farmer suicides.

In 2005, Maroju came to MIT for his MBA with the Sloan Fellows Program. As part of his thesis, he studied microfinance in India and considered how the rise of cell phone ownership offered an unprecedented opportunity to help people in rural areas.

“I always had a lot of passion for social issues,” Maroju says. “Coming from a humble background, I’ve seen the struggles of poverty.”

When Maroju finished his thesis in 2007, it caught the attention of Gray Ghost Ventures, an impact-driven investment firm that was working with the newly formed Legatum Center for Development and Entrepreneurship at MIT. Gray Ghost brought Maroju on as an advisor, and in that role he was introduced to a struggling technology company named SourceTrace, which offered branchless, or agent, banking solutions. Maroju suggested shifting SourceTrace’s focus to agriculture, and the company’s investors liked the idea.

He became CEO of SourceTrace in 2013, setting out to build new solutions to address each step of the agricultural supply chain.

“In agriculture, you can’t do anything in isolation,” Maroju says. “We always viewed it as an entire value chain, from consumer demand to nutrients used to safety of food. It all has an impact. All the players, from input suppliers to extension organizations, to buyers, processors, logistics, there’s a role to play for all of them, and we’ve always thought to make an impact you have to build end to end.”

Accordingly, SourceTrace’s platform includes features for everyone. Supply chain partners can use SourceTrace to buy crops, coordinate and track handoffs, and monitor storage conditions. Consumers can scan an item’s QR code at supermarkets and retail stores and learn about the farm where it came from, including that farm’s production processes.

Of course, the platform offers the most features to farmers, who can use it to get personalized advice on crop management, obtain fair trade and environmental certifications, monitor weather and pest attacks, and sell crops at fair market prices.

“All these solutions are targeted for businesses, governments, farmer cooperatives, financial institutions, so it’s a [business to business] software,” Maroju says. “But the common denominator is these [businesses] are all working with farmers. We’ve always focused on the farmers. I’ve always been passionate about smallholder farmers and we really want to give back.”

Focusing on the farmers

In addition to SourceTrace’s success with cocoa farmers in West Africa, the company has helped rice and maize farmers in Nigeria, grain farmers in Zimbabwe, organic cotton and spice farmers in India, seed producers in Bangladesh, and others. In total, SourceTrace’s platform is being used to improve production practices for 350 different crops around the world.

Maroju, who has been a mentor at the Legatum Center for the last several years, credits the center for helping the company scale across Africa. Today about 60 percent of SourceTrace’s farmers hail from the continent.

Much of the company’s success comes from leveraging the newly ubiquitous connectivity in developing countries and advances in smartphones. The company also uses remote sensing capabilities, artificial intelligence, blockchain, and QR codes to make its platform more effective.
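The blockchain piece, at its simplest, is about making trace records tamper-evident: each record's hash covers the record before it, so editing any earlier step breaks every later link. A minimal hash-chain sketch — a generic illustration, not SourceTrace's actual design, with invented field names:

```python
import hashlib
import json

def record_hash(event, prev_hash):
    """Deterministic hash of an event plus the previous record's hash."""
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def add_record(chain, event):
    """Append a trace event (harvest, purchase, handoff, ...) to the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"event": event, "prev": prev, "hash": record_hash(event, prev)})

def verify(chain):
    """Recompute every link; any edited record invalidates the chain."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev or rec["hash"] != record_hash(rec["event"], prev):
            return False
        prev = rec["hash"]
    return True

chain = []
add_record(chain, {"step": "harvest", "farm": "coop-17", "kg": 120})
add_record(chain, {"step": "purchase", "buyer": "processor-A", "kg": 120})
```

A QR code on the final product then only needs to point at the chain's head; a consumer's scan can walk the links back to the originating farm and detect any after-the-fact edits along the way.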

But Maroju says the technologies are a means to an end: The production improvements they unlock must help farmers secure long-term buyers and higher margins. The ultimate goal of the company is transforming the lives of some of the world’s poorest people.

“It’s all about farmer livelihood,” Maroju says. “With all this technology, we enable the farmers to access the best markets wherever globally available. Then we help them optimize their inputs and make procurement processes more reliable and minimize the risk. It all comes back to the farmers.”



from MIT News https://ift.tt/3m7N9v6
via Gabe's Musing's

Tuesday, September 8, 2020

Velcro-like food sensor detects spoilage and contamination

MIT engineers have designed a Velcro-like food sensor, made from an array of silk microneedles, that pierces through plastic packaging to sample food for signs of spoilage and bacterial contamination.

The sensor’s microneedles are molded from a solution of edible proteins found in silk cocoons, and are designed to draw fluid into the back of the sensor, which is printed with two types of specialized ink. One of these “bioinks” changes color when in contact with fluid of a certain pH range, indicating that the food has spoiled; the other turns color when it senses contaminating bacteria such as pathogenic E. coli.

The researchers attached the sensor to a fillet of raw fish that they had injected with a solution contaminated with E. coli. After less than a day, they found that the part of the sensor that was printed with bacteria-sensing bioink turned from blue to red — a clear sign that the fish was contaminated. After a few more hours, the pH-sensitive bioink also changed color, signaling that the fish had also spoiled.

The results, published today in the journal Advanced Functional Materials, are a first step toward developing a new colorimetric sensor that can detect signs of food spoilage and contamination.

Such smart food sensors might help head off outbreaks such as the recent salmonella contamination in onions and peaches. They could also prevent consumers from throwing out food that may be past a printed expiration date, but is in fact still consumable.

“There is a lot of food that’s wasted due to lack of proper labeling, and we’re throwing food away without even knowing if it’s spoiled or not,” says Benedetto Marelli, the Paul M. Cook Career Development Assistant Professor in MIT’s Department of Civil and Environmental Engineering. “People also waste a lot of food after outbreaks, because they’re not sure if the food is actually contaminated or not. A technology like this would give confidence to the end user to not waste food.”

Marelli’s co-authors on the paper are Doyoon Kim, Yunteng Cao, Dhanushkodi Mariappan, Michael S. Bono Jr., and A. John Hart.

Silk and printing

The new food sensor is the product of a collaboration between Marelli, whose lab harnesses the properties of silk to develop new technologies, and Hart, whose group develops new manufacturing processes.

Hart recently developed a high-resolution floxography technique for realizing microscopic patterns that can enable low-cost printed electronics and sensors. Meanwhile, Marelli had developed a silk-based microneedle stamp that penetrates and delivers nutrients to plants. In conversation, the researchers wondered whether their technologies could be paired to produce a printed food sensor that monitors food safety.

“Assessing the health of food by just measuring its surface is often not good enough. At some point, Benedetto mentioned his group’s microneedle work with plants, and we realized that we could combine our expertise to make a more effective sensor,” Hart recalls.

The team looked to create a sensor that could pierce through the surface of many types of food. The design they came up with consisted of an array of microneedles made from silk.

“Silk is completely edible, nontoxic, and can be used as a food ingredient, and it’s mechanically robust enough to penetrate through a large spectrum of tissue types, like meat, peaches, and lettuce,” Marelli says.

A deeper detection

To make the new sensor, Kim first made a solution of silk fibroin, a protein extracted from moth cocoons, and poured the solution into a silicone microneedle mold. After drying, he peeled away the resulting array of microneedles, each measuring about 1.6 millimeters long and 600 microns wide — about one-third the diameter of a spaghetti strand.

The team then developed solutions for two kinds of bioink — color-changing printable polymers that can be mixed with other sensing ingredients. In this case, the researchers mixed into one bioink an antibody that is sensitive to a molecule in E. coli. When the antibody comes in contact with that molecule, it changes shape and physically pushes on the surrounding polymer, which in turn changes the way the bioink absorbs light. In this way, the bioink can change color when it senses contaminating bacteria.

The researchers made a bioink containing antibodies sensitive to E. coli, and a second bioink sensitive to pH levels that are associated with spoilage. They printed the bacteria-sensing bioink on the surface of the microneedle array, in the pattern of the letter “E,” next to which they printed the pH-sensitive bioink, as a “C.” Both letters initially appeared blue in color.

Kim then embedded pores within each microneedle to increase the array’s ability to draw up fluid via capillary action. To test the new sensor, he bought several fillets of raw fish from a local grocery store and injected each fillet with a fluid containing either E. coli, Salmonella, or the fluid without any contaminants. He stuck a sensor into each fillet. Then, he waited.
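The capillary draw that the pores enhance can be roughed out with Jurin's law for a thin vertical tube, h = 2γcosθ/(ρgr). The pore radius below is a hypothetical figure chosen for illustration (the article gives no dimensions), and real food tissue behaves far less cleanly than an ideal capillary:

```python
import math

gamma = 0.072    # N/m, surface tension of water at room temperature
theta = 0.0      # contact angle in radians (perfect wetting, assumed)
rho = 1000.0     # kg/m^3, density of water
g = 9.81         # m/s^2, gravitational acceleration
r = 50e-6        # m, pore radius (hypothetical; not from the article)

# Jurin's law: equilibrium rise height in a thin capillary
h = 2 * gamma * math.cos(theta) / (rho * g * r)   # about 0.29 m
```

Even with these rough numbers, the available capillary rise is a couple hundred times the 1.6-millimeter needle length, which is why narrow pores can wick fluid through the array quickly.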

After about 16 hours, the team observed that the “E” turned from blue to red, only in the fillet contaminated with E. coli, indicating that the sensor accurately detected the bacterial antigens. After several more hours, both the “C” and “E” in all samples turned red, indicating that every fillet had spoiled.

The researchers also found their new sensor indicates contamination and spoilage faster than existing sensors that only detect pathogens on the surface of foods.

“There are many cavities and holes in food where pathogens are embedded, and surface sensors cannot detect these,” Kim says. “So we have to plug in a bit deeper to improve the reliability of the detection. Using this piercing technique, we also don’t have to open a package to inspect food quality.”

The team is looking for ways to speed up the microneedles’ absorption of fluid, as well as the bioinks’ sensing of contaminants. Once the design is optimized, they envision the sensor could be used at various stages along the supply chain, from operators in processing plants, who can use the sensors to monitor products before they are shipped out, to consumers who may choose to apply the sensors on certain foods to make sure they are safe to eat.

This research was supported, in part, by the MIT Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), the U.S. National Science Foundation, and the U.S. Office of Naval Research.



from MIT News https://ift.tt/35w7Ue7
via Gabe's Musing's

Monday, September 7, 2020

Sarah Williams: Applying a data-driven approach to help cities function

Lacking a strong public transit system, residents of Nairobi, Kenya, often get around the city using “matatus” — group taxis following familiar routes. This informal method of transportation is essential to people’s lives: About 3.5 million people in Nairobi regularly use matatus.

Around 2012, Sarah Williams became interested in mapping Nairobi’s matatus. Now an associate professor in MIT’s Department of Urban Studies and Planning (DUSP), she helped develop an app that collected data from the vehicles as they circulated around Nairobi, then collaborated with matatu owners and drivers to map the entire network. By 2014, Nairobi’s leaders liked the map so much they started using Williams’ design themselves.

“The city took it on and made it the official [transit] map for the city,” Williams says. Indeed, the Nairobi matatu map is now a common sight — a distant cousin of the London Underground map. “An image has a long life if it’s impactful,” she adds.

That project was a rapid success story — from academic research effort to mass-media use in a couple of years — but for Williams, her work in this area was just getting started. Cities from Amman, Jordan, to Managua, Nicaragua, have been inspired by the project and mapped their own networks, and Williams created a resource center so that even more places could do the same, from the Dominican Republic to Addis Ababa, Ethiopia.

“We’re trying to build a network that supports this work,” says Williams, who is contemplating ways to make the effort its own MIT-based project. “All these people in the network can help each other. But I think it really needs more support. It probably needs to be a full-time nonprofit with a director who is really doing outreach.”

The matatu project hardly exhausts Williams’ interests. As a scholar in DUSP, her forte is conducting data-heavy urban research, which can then be expressed in striking visualizations, ideally generating public interest. Over her career, she has worked with other scholars on an array of topics, including criminal justice, the environment, and housing. 

Notably, Williams was part of the “Million Dollar Blocks” project (along with researchers from Columbia University and the Justice Mapping Center), which mapped the places where residents had been incarcerated, and noted the costs of incarceration. That project helped lend support to the Criminal Justice Reinvestment Act of 2010, which allocated funding for job-training programs for former prisoners; the maps themselves were exhibited at New York’s Museum of Modern Art.

Williams’ “Ghost Cities in China” project shed new light on the country’s urban geography by examining places where the Chinese government had over-developed. By scraping web data and mapping the information, Williams was able to identify areas without amenities — which indicated that they were notably underinhabited. Doing that helped engender new dialogue among international experts about China’s growth and planning practices.

“It is about using data for the public good,” Williams says. “We hear big data is going to change the world, but I don’t believe it will unless we synthesize it into tools with a public benefit. Visualization communicates the insights of data very quickly. The reason I have such a diversity of projects is because I’m interested in how we can bring data into action in multiple areas.”

Williams also has a book coming out in November, “Data Action,” examining these topics as well. “The book brings all these diverse projects into a kind of manifesto for those who want to use data to generate civic change,” Williams says. And she is expanding her teaching portfolio into areas that include ethics and data. For her research and teaching, Williams received tenure from MIT in 2019.

“I was actually doing planning”

Williams grew up in Washington and studied geography and history as an undergraduate at Clark University. That interest in geography has stayed with her throughout her career. It also led to a significant job for her after college, working for one of the pioneering firms developing Geographic Information System (GIS) tools.

“I got them to hire me to pack boxes, and when I left I was a programmer,” Williams recounts.

Still, Williams had other intellectual interests she wanted to pursue as well. “I was always really, really interested in design,” she says. That manifested itself in the form of landscape architecture. Williams initially pursued a master’s degree in the field at the University of Pennsylvania.

But there was one problem: A lot of professional opportunities for landscape architects come from private clients, whereas Williams was mostly interested in public-scale projects. She got a job with the city of Philadelphia, in the Office of Watersheds, working on water mitigation designs for public areas — that is, trying to use the landscape to absorb water and prevent harmful runoff on city properties.

Eventually, Williams says, “I realized I was actually doing planning. I realized what planning was, and the impact I wanted to have in communities. So I went to planning school.”

Williams enrolled at MIT, where she received her master’s in city planning from DUSP, and linked together all the elements of her education and work experience.

“I always had this programmer side of me, and the design part of me, and I realized I could have an impact through doing data analysis, and visualizing it and communicating it,” Williams says. “That percolated while I was here.”

After graduation, Williams was hired on the faculty at Columbia University. She joined the MIT faculty in 2014.

Ethics and computing

At MIT, Williams has taught an array of classes about data, design, and planning — and her teaching has branched out recently as well. Last spring, Williams and Eden Medina, an associate professor in the MIT Program in Science, Technology, and Society, team-taught a new course, 11.155J / STS.005J (Data and Society), about the ethics and social implications of data-rich research and business practices.

“I’m really excited about it, because we’re talking about issues of data literacy, privacy, consent, and biases,” Williams says. “Data has a context, always — how you collect it and who you collect it from really tells you what the data is. We want to tell our undergrads that your data, and how you analyze data, has an effect on society.”

That said, Williams has also found that weaving ethical issues into any course is a crucial part of contemporary pedagogy.

“I try to teach ethics in all my classes,” she says. And with the development of the new MIT Stephen A. Schwarzman College of Computing, Williams’ research and her teaching might appeal to new students who are receptive to an interdisciplinary, data-driven way of examining urban issues.

“I’m so excited about the College of Computing, because it’s about how you bring computing into different fields,” Williams says. “I’m a geographer, I’m an architect, an urban planner, and I’m a data scientist. I mash up these fields together in order to create new insights and try to create an impact on the world.”



from MIT News https://ift.tt/2GwL6jE
via Gabe's Musing's

Thursday, September 3, 2020

3 Questions: Thomas Levenson on a finance scandal for the ages

The subprime mortgage-bond meltdown. The dot-com boom. The Enron fiasco. The last couple of decades have seen their share of finance absurdities and scandals, but such episodes are hardly new. Indeed, the most important of them all may be the South Sea Bubble, in which Britain’s South Sea Company floated shares based on the promise of future trade while assuming Britain’s national debt, but then collapsed in 1720, ruining many investors.

And yet, as MIT Professor Thomas Levenson explains in a new book — “Money for Nothing: The Scientists, Fraudsters, and Corrupt Politicians Who Reinvented Money, Panicked a Nation, and Made the World Rich,” just published by Crown — the South Sea Bubble helped shape modern finance and debt markets. MIT News talked to Levenson, a professor in MIT’s Graduate Program in Science Writing and the Comparative Media Studies / Writing program, about his new work.

Q: How did you become interested in the South Sea Bubble, and what is relevant about it today?

A: I was writing a book about Isaac Newton, [“Newton and the Counterfeiter,” 2009], and came upon a stray mention about how he lost money on the South Sea Bubble. I thought: This is really curious. Isaac Newton is the smartest man maybe ever, certainly the smartest of his time. What was he doing losing a lot of money? The more I looked at this quite famous case of stock market exuberance and crash, the more it became evident why it seemed like a good idea to people at the time.

What’s striking is how little has changed. A lot of things we think of as part of our 21st-century financial markets were already there in 1720. Do we still have the same dynamics and pathologies that created that disaster? Yes, absolutely. We’ve gotten cleverer, the math behind the financial markets is more complicated, but the basic architecture of financial crashes and bubbles is similar.

Q: One part of this book is an intellectual history: You look at Newton, the astronomer Edmond Halley, a scholar named William Petty, and other figures who, you contend, helped pave the way for these financial innovations. What’s the connection between their work and the South Sea Bubble?

A: In one way, the single most important takeaway from the book is that although the South Sea Bubble was a disaster for those who lost all their money, it worked. It was the final victory in a revolutionary change in the way Britain, uniquely among the nations, was able to fund its national obligations. It led to the creation of the first modern bond market. If you think of finance as a technology, that’s an incredibly powerful technology. Because it allows you to basically rent money from the future, use that money in the present to do things that help build the wealth of your nation going forward, and thus make the future richer than it otherwise would have been.

To get to that point, there needed to be a change in the way people understood the relationship of numbers to experience. The first part of the book shows how the scientific revolution and the financial revolution are intimately connected. They’re part of the same phenomenon, populated in part by the same people and driven by similar habits of thinking. The core idea is that empiricism and quantification allow you to apply disciplined reasoning, in the form of mathematics, to come up with insights that are available no other way. 

Petty, a polymath who was a founder of the Royal Society, explicitly applied that doctrine of numbers, measure, and observation to practical problems [such as assessing the wealth of Ireland]. Edmond Halley applied calculations to provide the basis for life insurance. Yes, the scientific revolution involved things like what governs the motion of Jupiter, but it’s also: How should we think about probability and risk in human life? Isaac Newton wrote fairly well-thought-out memos on credit.

Q: All right, lightning round here. Who is the hero and who is the villain of the piece? What surprised you most about the South Sea Bubble? Who are your ideal readers?

A: There are no disinterested noble characters. The chief villain of the day is John Blunt, the secretary of the South Sea Company, the public face and one of the chief architects of the scheme. And it’s true he wanted to get rich and was unscrupulous in defense of the company. But I don’t think he set out to defraud the nation. He got on a horse that bolted and stayed on as long as he could. The great hero for me is clearly Robert Walpole, the parliamentary figure often seen as the first true prime minister in the British system of governance. In a sense he was lucky; if he’d been in power [when the scheme started], he might have been bribed. But he was a driven and devoted political leader who understood the problem well and worked his way toward a response. Like all his peers, he was perfectly happy with the ordinary corruption of the time. He got rich holding office. Just like Blunt wasn’t all bad, Walpole was not someone you’d entirely admire.

It surprised me that the sense of human passions around money felt so familiar. Newton was a formidable intellect who had the mathematical knowledge in his fingertips to reason his way to the flaw in the South Sea plan. Other people did that. Newton didn’t, because he was a human being and got caught up in the money mania. Even somebody with his focus and concentration and seeming detachment from human passions was still vulnerable to the same excitement.

My hope for this book is that it would build bridges between different groups: people who like history and want to understand how the past makes the present; people who want to understand how science works; and, I hope, a lot of people who want to understand ideas about finance and money. The book is in some ways an extended meditation on how money changes its character over time.



from MIT News https://ift.tt/2DqtoNA
via Gabe's Musing's

Wednesday, September 2, 2020

A “bang” in LIGO and Virgo detectors signals most massive gravitational-wave source yet

For all its vast emptiness, the universe is humming with activity in the form of gravitational waves. Produced by extreme astrophysical phenomena, these reverberations ripple forth and shake the fabric of space-time, like the clang of a cosmic bell.

Now researchers have detected a signal from what may be the most massive black hole merger yet observed in gravitational waves. The product of the merger is the first clear detection of an “intermediate-mass” black hole, with a mass between 100 and 1,000 times that of the sun.

They detected the signal, which they have labeled GW190521, on May 21, 2019, with the National Science Foundation’s Laser Interferometer Gravitational-wave Observatory (LIGO), a pair of identical, 4-kilometer-long interferometers in the United States; and Virgo, a 3-kilometer-long detector in Italy.

The signal, resembling about four short wiggles, is extremely brief in duration, lasting less than one-tenth of a second. From what the researchers can tell, GW190521 was generated by a source that is roughly 5 gigaparsecs away, when the universe was about half its age, making it one of the most distant gravitational-wave sources detected so far.

As for what produced this signal, based on a powerful suite of state-of-the-art computational and modeling tools, scientists think that GW190521 was most likely generated by a binary black hole merger with unusual properties.

Almost every confirmed gravitational-wave signal to date has been from a binary merger, either between two black holes or two neutron stars. This newest merger appears to be the most massive yet, involving two inspiraling black holes with masses about 85 and 66 times the mass of the sun.

The LIGO-Virgo team has also measured each black hole’s spin and discovered that as the black holes were circling ever closer together, they could have been spinning about their own axes, at angles that were out of alignment with the axis of their orbit. The black holes’ misaligned spins likely caused their orbits to wobble, or “precess,” as the two Goliaths spiraled toward each other.

The new signal likely represents the instant that the two black holes merged. The merger created an even more massive black hole, of about 142 solar masses, and released an enormous amount of energy, equivalent to around 8 solar masses, spread across the universe in the form of gravitational waves.
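The scale of that energy release can be checked with E = mc². The constants below are standard values, and the roughly 8-solar-mass figure comes from the article (the quoted component and final masses are independent median estimates with sizable uncertainties, so they need not subtract exactly):

```python
M_SUN = 1.989e30   # kg, one solar mass
C = 2.998e8        # m/s, speed of light

radiated_mass = 8 * M_SUN                # ~8 solar masses, per the article
energy_joules = radiated_mass * C ** 2   # E = mc^2, about 1.4e48 J
```

Roughly 1.4 × 10⁴⁸ joules, converted to gravitational waves in a fraction of a second.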

“This doesn’t look much like a chirp, which is what we typically detect,” says Virgo member Nelson Christensen, a researcher at the French National Centre for Scientific Research (CNRS), comparing the signal to LIGO’s first detection of gravitational waves in 2015. “This is more like something that goes ‘bang,’ and it’s the most massive signal LIGO and Virgo have seen.”

The international team of scientists, who make up the LIGO Scientific Collaboration (LSC) and the Virgo Collaboration, have reported their findings in two papers published today. One, appearing in Physical Review Letters, details the discovery, and the other, in The Astrophysical Journal Letters, discusses the signal’s physical properties and astrophysical implications.

“LIGO once again surprises us not just with the detection of black holes in sizes that are difficult to explain, but doing it using techniques that were not designed specifically for stellar mergers,” says Pedro Marronetti, program director for gravitational physics at the National Science Foundation. “This is of tremendous importance since it showcases the instrument’s ability to detect signals from completely unforeseen astrophysical events. LIGO shows that it can also observe the unexpected.”

In the mass gap

The uniquely large masses of the two inspiraling black holes, as well as the final black hole, raise a slew of questions regarding their formation.

All of the black holes observed to date fit within either of two categories: stellar-mass black holes, which measure from a few solar masses up to tens of solar masses and are thought to form when massive stars die; or supermassive black holes, such as the one at the center of the Milky Way galaxy, which are hundreds of thousands to billions of times more massive than our sun.

However, the final 142-solar-mass black hole produced by the GW190521 merger lies within an intermediate mass range between stellar-mass and supermassive black holes — the first of its kind ever detected.

The two progenitor black holes that produced the final black hole also seem to be unique in their size. They’re so massive that scientists suspect one or both of them may not have formed from a collapsing star, as most stellar-mass black holes do.

According to the physics of stellar evolution, outward pressure from the photons and gas in a star’s core supports it against the force of gravity pushing inward, so that the star is stable, like the sun. After the core of a massive star fuses nuclei as heavy as iron, it can no longer produce enough pressure to support the outer layers. When this outward pressure is less than gravity, the star collapses under its own weight, in an explosion called a core-collapse supernova, which can leave behind a black hole.

This process can explain how stars as massive as 130 solar masses can produce black holes that are up to 65 solar masses. But for heavier stars, a phenomenon known as “pair instability” is thought to kick in. When the core’s photons become extremely energetic, they can morph into an electron and antielectron pair. These pairs generate less pressure than photons, causing the star to become unstable against gravitational collapse, and the resulting explosion is strong enough to leave nothing behind. Even more massive stars, above 200 solar masses, would eventually collapse directly into a black hole of at least 120 solar masses. A collapsing star, then, should not be able to produce a black hole between approximately 65 and 120 solar masses — a range that is known as the “pair instability mass gap.”
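The mass ranges above can be summarized as a simple lookup. This is a cartoon built from the article's round numbers; the true boundaries depend on details like stellar metallicity and are actively debated:

```python
def collapse_outcome(progenitor_mass_msun):
    """Cartoon outcome of core collapse for a massive star, using the
    round numbers quoted in the article (real boundaries are uncertain)."""
    if progenitor_mass_msun <= 130:
        return "black hole of up to ~65 solar masses"
    if progenitor_mass_msun <= 200:
        return "no remnant: pair-instability supernova"
    return "direct collapse to a black hole of ~120+ solar masses"
```

Note that no branch of this function yields a black hole between about 65 and 120 solar masses, which is exactly why the 85-solar-mass progenitor is puzzling.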

But now, the heavier of the two black holes that produced the GW190521 signal, at 85 solar masses, is the first so far detected within the pair instability mass gap.

“The fact that we’re seeing a black hole in this mass gap will make a lot of astrophysicists scratch their heads and try to figure out how these black holes were made,” says Christensen, who is the director of the Artemis Laboratory at the Nice Observatory in France.

One possibility, which the researchers consider in their second paper, is of a hierarchical merger, in which the two progenitor black holes themselves may have formed from the merging of two smaller black holes, before migrating together and eventually merging.

“This event opens more questions than it provides answers,” says LIGO member Alan Weinstein, professor of physics at Caltech. “From the perspective of discovery and physics, it’s a very exciting thing.”

“Something unexpected”

There are many remaining questions regarding GW190521.

As LIGO and Virgo detectors listen for gravitational waves passing through Earth, automated searches comb through the incoming data for interesting signals. These searches can use two different methods: algorithms that pick out specific wave patterns in the data that may have been produced by compact binary systems; and more general “burst” searches, which essentially look for anything out of the ordinary.

LIGO member Salvatore Vitale, assistant professor of physics at MIT, likens compact binary searches to “passing a comb through data, that will catch things in a certain spacing,” in contrast to burst searches that are more of a “catch-all” approach.
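The two search styles Vitale describes, a matched filter that slides a known template across the data and a burst search that flags excess power, can be sketched in a few lines of NumPy. Everything here (sample rate, waveform, noise level) is an illustrative toy, not LIGO's actual pipelines:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1024                  # samples per second (toy scale)
t = np.arange(fs) / fs     # one second of data

# Inject a short sine-Gaussian "burst" at 0.6 s into Gaussian noise.
tau, f0, t0 = 0.02, 60.0, 0.6
burst = np.exp(-(t - t0) ** 2 / (2 * tau ** 2)) * np.sin(2 * np.pi * f0 * (t - t0))
data = 0.3 * rng.standard_normal(t.size) + burst

# Matched-filter search ("the comb"): slide a known template across the data.
L = 205
tt = (np.arange(L) - L // 2) / fs
template = np.exp(-tt ** 2 / (2 * tau ** 2)) * np.sin(2 * np.pi * f0 * tt)
corr = np.correlate(data, template, mode="valid")
t_matched = (np.argmax(np.abs(corr)) + L // 2) / fs

# Burst search ("the catch-all"): flag whichever window has excess power.
win = 64
energy = np.convolve(data ** 2, np.ones(win) / win, mode="same")
t_burst = t[np.argmax(energy)]
```

Both searches recover the injection near 0.6 s, but the matched filter only works if the template's "spacing" matches the signal, while the energy threshold catches anything loud, whatever its shape.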

In the case of GW190521, it was a burst search that picked up the signal slightly more clearly, opening the very small chance that the gravitational waves arose from something other than a binary merger.

“The bar for asserting we’ve discovered something new is very high,” Weinstein says. “So we typically apply Occam’s razor: The simpler solution is the better one, which in this case is a binary black hole.”

But what if something entirely new produced these gravitational waves? It’s a tantalizing prospect, and in their paper the scientists briefly consider other sources in the universe that might have produced the signal they detected. For instance, perhaps the gravitational waves were emitted by a collapsing star in our galaxy. The signal could also be from a cosmic string produced just after the universe inflated in its earliest moments — although neither of these exotic possibilities matches the data as well as a binary merger.

“Since we first turned on LIGO, everything we’ve observed with confidence has been a collision of black holes or neutron stars,” Weinstein says. “This is the one event where our analysis allows the possibility that this event is not such a collision. Although this event is consistent with being from an exceptionally massive binary black hole merger, and alternative explanations are disfavored, it is pushing the boundaries of our confidence. And that potentially makes it extremely exciting. Because we have all been hoping for something new, something unexpected, that could challenge what we’ve learned already. This event has the potential for doing that.”

This research was funded by the U.S. National Science Foundation.



from MIT News https://ift.tt/3lJuFR5
via Gabe's Musing's

An unexpected origin story for a lopsided black hole merger

A lopsided merger of two black holes may have an oddball origin story, according to a new study by researchers at MIT and elsewhere.

The merger was first detected on April 12, 2019, as a gravitational wave that arrived at the detectors of both LIGO (the Laser Interferometer Gravitational-wave Observatory) and its Italian counterpart, Virgo. Scientists labeled the signal as GW190412 and determined that it emanated from a clash between two David-and-Goliath black holes, one three times more massive than the other. The signal marked the first detection of a merger between two black holes of very different sizes.

Now the new study, published today in the journal Physical Review Letters, shows that this lopsided merger may have originated through a very different process compared to how most mergers, or binaries, are thought to form.

It’s likely that the more massive of the two black holes was itself a product of a prior merger between two parent black holes. The Goliath that spun out of that first collision may have then ricocheted around a densely packed “nuclear cluster” before merging with the second, smaller black hole — a raucous event that sent gravitational waves rippling across space.

GW190412 may then be a second-generation, or “hierarchical,” merger, standing apart from other first-generation mergers that LIGO and Virgo have so far detected.

“This event is an oddball the universe has thrown at us — it was something we didn’t see coming,” says study coauthor Salvatore Vitale, an assistant professor of physics at MIT and a LIGO member. “But nothing happens just once in the universe. And something like this, though rare, we will see again, and we’ll be able to say more about the universe.”

Vitale’s coauthors are Davide Gerosa of the University of Birmingham and Emanuele Berti of Johns Hopkins University.

A struggle to explain

There are two main ways in which black hole mergers are thought to form. The first is known as a common envelope process, where two neighboring stars, after billions of years, explode to form two neighboring black holes that eventually share a common envelope, or disk of gas. After another few billion years, the black holes spiral in and merge.

“You can think of this like a couple being together all their lives,” Vitale says. “This process is suspected to happen in the disc of galaxies like our own.”

The other common path by which black hole mergers form is via dynamical interactions. Imagine, in place of a monogamous environment, a galactic rave, where thousands of black holes are crammed into a small, dense region of the universe. When two black holes start to partner up, a third may knock the couple apart in a dynamical interaction that can repeat many times over, before a pair of black holes finally merges.

In both the common envelope process and the dynamical interaction scenario, the merging black holes should have roughly the same mass, unlike the lopsided mass ratio of GW190412. They should also have little to no spin, whereas GW190412 has a surprisingly high spin.

“The bottom line is, both these scenarios, which people traditionally think are ideal nurseries for black hole binaries in the universe, struggle to explain the mass ratio and spin of this event,” Vitale says.

Black hole tracker

In their new paper, the researchers used two models to show that it is very unlikely that GW190412 came from either a common envelope process or a dynamical interaction.

They first modeled the evolution of a typical galaxy using STAR TRACK, a simulation that tracks galaxies over billions of years, starting with the coalescing of gas and proceeding to the way stars take shape and explode, and then collapse into black holes that eventually merge. The second model simulates random, dynamical encounters in globular clusters — dense concentrations of stars around most galaxies.

The team ran both simulations multiple times, tuning the parameters and studying the properties of the black hole mergers that emerged. For those mergers that formed through a common envelope process, a merger like GW190412 was very rare, cropping up only after a few million events. Dynamical interactions were slightly more likely to produce such an event, after a few thousand mergers.

However, GW190412 was detected by LIGO and Virgo after only 50 other detections, suggesting that it likely arose through some other process.
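A back-of-the-envelope version of that argument: if a channel yields a GW190412-like event once per n mergers, the chance of catching at least one among the first 50 detections is 1 − (1 − 1/n)⁵⁰. The values of n below are illustrative round numbers standing in for the article's "a few thousand" and "a few million":

```python
def chance_in_first(n_detections, one_in):
    """Chance of at least one occurrence among n_detections, if each
    detection independently has a 1-in-`one_in` chance of being such an event."""
    return 1 - (1 - 1 / one_in) ** n_detections

p_dynamical = chance_in_first(50, 3_000)       # "a few thousand mergers"
p_envelope = chance_in_first(50, 2_000_000)    # "a few million events"
```

Under these assumptions the dynamical channel gives under a 2 percent chance, and the common envelope channel a far smaller one, which is why seeing such an event so early points toward a different formation process.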

“No matter what we do, we cannot easily produce this event in these more common formation channels,” Vitale says.

The process of hierarchical merging may better explain GW190412’s lopsided mass ratio and its high spin. If one black hole was a product of a previous pairing of two parent black holes of similar mass, it would itself be more massive than either parent, and later significantly overshadow its first-generation partner, creating a high mass ratio in the final merger.

A hierarchical process could also generate a merger with a high spin: The parent black holes, in their chaotic merging, would spin up the resulting black hole, which would then carry this spin into its own ultimate collision.

“You do the math, and it turns out the leftover black hole would have a spin which is very close to the total spin of this merger,” Vitale explains.

No escape

If GW190412 indeed formed through hierarchical merging, Vitale says the event could also shed light on the environment in which it formed. The team found that if the larger of the two black holes formed from a previous collision, that collision likely generated a huge amount of energy that not only spun out a new black hole, but kicked it across some distance.

“If it’s kicked too hard, it would just leave the cluster and go into the empty interstellar medium, and not be able to merge again,” Vitale says.

If the object was able to merge again (in this case, to produce GW190412), it would mean the kick that it received was not enough to escape the stellar cluster in which it formed. If GW190412 indeed is a product of hierarchical merging, the team calculated that it would have occurred in an environment with an escape velocity higher than 150 kilometers per second. For perspective, the escape velocity of most globular clusters is about 50 kilometers per second.
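The escape-velocity comparison follows from v_esc = √(2GM/R). The cluster masses and radii below are illustrative values chosen for this sketch, not figures from the study:

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
M_SUN = 1.989e30   # kg, one solar mass
PC = 3.086e16      # m, one parsec

def escape_velocity(mass_kg, radius_m):
    """Escape velocity from radius R around mass M: sqrt(2GM/R)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

# Illustrative environments (numbers assumed, not from the article):
v_nuclear = escape_velocity(1e7 * M_SUN, 3 * PC)    # dense nuclear cluster, ~170 km/s
v_globular = escape_velocity(5e5 * M_SUN, 2 * PC)   # typical globular cluster, ~45 km/s
```

With these rough inputs, only the nuclear-cluster-scale environment clears the 150 kilometers per second threshold needed to retain a kicked black hole.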

This means that whatever environment GW190412 arose from had an immense gravitational pull, and the team believes that such an environment could have been either the disk of gas around a supermassive black hole, or a “nuclear cluster” — an incredibly dense region of the universe, packed with tens of millions of stars.

“This merger must have come from an unusual place,” Vitale says. “As LIGO and Virgo continue to make new detections, we can use these discoveries to learn new things about the universe.”

This research was funded, in part, by the U.S. National Science Foundation and MIT’s Solomon Buchsbaum Research Fund.



from MIT News https://ift.tt/32VfT1o
via Gabe's Musing's

Tuesday, September 1, 2020

Listening to immigrant and indigenous Pacific Islander voices

After Kevin Lujan Lee came out to his parents, he found another family in Improving Dreams, Equality, Access, and Success (IDEAS), an undocumented student advocacy and support group at the University of California at Los Angeles. After joining the organization to support his undocumented partner at the time, he fell in love with the group and community around it, and became involved in organizing alongside undocumented youth. When Lee found himself struggling to make ends meet upon graduation, it was his then-partner’s parents who took him in and cared for him despite their limited means and the constant threat of deportation.

“I would be nothing if it weren’t for IDEAS, if it weren’t for undocumented people sheltering me and giving me food,” Lee says. “This was the spirit that the group embodied. People who were more than willing to just fork out what they didn’t have.”

The third-year PhD candidate credits the family and mission of IDEAS for every subsequent step he has taken, from his master’s degree at the University of Chicago to his current research in MIT’s Department of Urban Studies and Planning.

“I really don’t think of myself as a researcher,” Lee says. “I’m first and foremost an organizer. That’s where I gained my purpose. That’s where I learned what love is in its most unconditional and revolutionary form.”

Lee’s priority is to give back that sense of unconditional love to the communities who have made him who he is, a scholar with broad interests in equitable community and economic development. His research has ranged from the political engagement of Pacific Islanders in Hawai‘i and Guåhan, to the role of worker centers — many of which serve undocumented immigrants in the informal economy — in California’s workforce development system.

“When you work with people and in geographies that are invisible, you need power,” Lee says. “That’s why I’m here.”

“You have to make a decision”

The people Lee met through IDEAS have become crucial points of access that make his research possible on a day-to-day basis. Organizing has always built itself on interpersonal trust, and for Lee, shared connections in the immigrant labor world allow him to engage with the organizations he studies. For instance, it was through a collaboration with Sasha Feldstein at the California Immigrant Policy Center that he pursued his first research project as an MIT student, which looked at the way gaps in organizational networks prevent marginalized populations from accessing the resources that are supposedly for them.

When it comes to workforce development, social scientists have identified the problem of “creaming,” wherein nonprofits focus their resources on the populations that are easiest to serve in the face of tightened budgets, leaving behind the most marginalized in the process.

Inspired by the centrality of interpersonal relationships in the organizing world, Lee identified an additional, network-based mechanism through which the most marginalized are excluded from workforce development nonprofits. He calls this “structural creaming.” One of the ways in which federally funded job training providers exclude marginalized populations, he says, is by failing to establish or maintain relationships with smaller nonprofits specifically oriented toward meeting those populations’ needs.

Without those relationships, such providers simply don’t reach these populations, and if they do, they might not provide them with the appropriate support or refer them to the right employers. As a result, people slip through the cracks. Small-scale service providers don’t receive the funding they could use to provide greater assistance to the marginalized populations they already reach.

“These small-scale organizations are not really in conversation with the mainstream workforce development systems,” explains Lee, who works on this issue along with Ana Luz Gonzalez-Vasquez and Magaly López at the UCLA Labor Center. “They often have a fraught relationship with American Job Centers [federally funded job-training providers under the U.S. Department of Labor]. And, these organizations are not often studied by economists, who wield tremendous influence in workforce development policymaking.”

Conversations about workforce development for the most marginalized need to move away from prioritizing the “average client,” scalability, and cost efficiency, Lee says.

“You have to make a decision,” he says of the nonprofits and agencies in this field. “Serving immigrants is not always cost-effective. It requires conducting targeted outreach, providing English as Second Language classes, offering ongoing support to address barriers to program participation, and high-quality employment — it’s a difficult process. But if you care about these populations, that’s what you'll do.”

A life of many edges

An organizing-first approach was what brought Lee to MIT’s Department of Urban Studies and Planning (DUSP) to begin with. Advised by Associate Professor Justin Steil, Lee feels the department’s emphasis on interdisciplinary, applied research has allowed him to pursue his interests within the context of the academy.

“What he provides is unconditional support, a ready smile, and a lot of space to do what I want,” Lee says of his advisor. “There are a lot of wonderful junior faculty who are phenomenal, and I’m very grateful to them.”

Lee’s ability to range broadly over his interests in immigrant rights and equitable development has led him toward several collaborative projects on an issue that reaches into his own ancestry and past: indigenous Pacific Islander sovereignty.

An initial project with the Center for Pacific Island Studies at the University of Hawai‘i at Mānoa prompted him to learn more about indigenous sovereignty and colonialism. Now, he is collaborating with Ngoc Phan, a political scientist at Hawai‘i Pacific University, to analyze her Native Hawaiian Survey. And, alongside Patrick Thomsen of the University of Auckland and Lana Lopesi of the Auckland University of Technology, he is theorizing Pacific Islander mobilities. The Pacific has many “edges,” Lee says, alluding to the work of Pacific Studies scholar and activist Teresia Teaiwa, who emphasizes the deep heterogeneity of land, history, culture, language, religion and spirituality across the Pacific.

Within his own life, Lee has been grappling with the relationship between indigeneity and his own position as an immigrant and settler on Native American lands. Lee is an indigenous Pacific Islander himself; his mother is CHamoru, from Guåhan, and his father is Chinese, from Malaysia. Growing up in Malaysia, he remembers how his mother maintained her relationship to her family and to the island, in the face of a hierarchical and colorist society where she was required to overcome inordinate obstacles as a young mother. Through his research, he aims to reconnect to his Pacific Islander heritage and takes inspiration from his mother’s resilience.

“She is my connection,” he says. “She continuously demonstrates for me what it means to be CHamoru and what it means to be an Islander. It’s to have strength, and to stay connected to your homeland. You do it because it’s in your blood. You just have to.”

Since at least 2010, DUSP has only ever enrolled one PhD student who identifies as “Native Hawaiian or Other Pacific Islander” — Lee himself. Thus, he feels a responsibility to make sure he is not the last Pacific Islander to come through his department. Inspired by the recent release of the Black DUSP Thesis, he also works alongside his colleagues to advance equity within his department. In the future, he hopes to help establish a pipeline of Pacific Islanders into urban planning.

“Sovereignty movements are very much alive in the Pacific, and people are trying to build their nations,” Lee says. “But there is no pipeline for Pacific Islanders into urban planning. How are you going to engage the World Bank and the Asian Development Bank about measures of development, how are you going to talk about community control, how are you going to talk about the military’s role in land use, if you don’t have these skills?”

With both his academic and organizing work, Lee acknowledges he has a lot on his plate. “I am deeply imperfect and often thinly stretched,” he says. “But when things matter so deeply in your bones, the energy just comes. It has to.”



from MIT News https://ift.tt/3hQdtXP
via Gabe's Musing's

Monday, August 31, 2020

Making health care more personal

The health care system today largely focuses on helping people after they have problems. When they do receive treatment, it’s based on what has worked best on average across a huge, diverse group of patients.

Now the company Health at Scale is making health care more proactive and personalized — and, true to its name, it’s doing so for millions of people.

Health at Scale makes care recommendations using new classes of machine-learning models that work even when only small amounts of data on individual patients, providers, and treatments are available.

The company is already working with health plans, insurers, and employers to match patients with doctors. It’s also helping to identify people at rising risk of visiting the emergency department or being hospitalized in the future, and to predict the progression of chronic diseases. Recently, Health at Scale showed its models can identify people at risk of severe respiratory infections like influenza or pneumonia, or, potentially, Covid-19.

“From the beginning, we decided all of our predictions would be related to achieving better outcomes for patients,” says John Guttag, chief technology officer of Health at Scale and the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT. “We’re trying to predict what treatment or physician or intervention would lead to better outcomes for people.”

A new approach to improving health

Health at Scale co-founder and CEO Zeeshan Syed met Guttag while studying electrical engineering and computer science at MIT. Guttag served as Syed’s advisor for his bachelor’s and master’s degrees. When Syed decided to pursue his PhD, he only applied to one school, and his advisor was easy to choose.

Syed did his PhD through the Harvard-MIT Program in Health Sciences and Technology (HST). During that time, he looked at how patients who’d had heart attacks could be better managed. The work was personal for Syed: His father had recently suffered a serious heart attack.

Through the work, Syed met Mohammed Saeed SM ’97, PhD ’07, who was also in the HST program. Syed, Guttag, and Saeed founded Health at Scale in 2015 along with David Guttag ’05, focusing on using core advances in machine learning to solve some of health care’s hardest problems.

“It started with the burning itch to address real challenges in health care about personalization and prediction,” Syed says.

From the beginning, the founders knew their solutions needed to work with widely available data like health care claims, which include information on diagnoses, tests, prescriptions, and more. They also sought to build tools for cleaning up and processing raw data sets, so that their models would be part of what Guttag refers to as a “full machine-learning stack for health care.”

Finally, to deliver effective, personalized solutions, the founders knew their models needed to work with small numbers of encounters for individual physicians, clinics, and patients, which posed severe challenges for conventional AI and machine learning.

“The large companies getting into [the health care AI] space had it wrong in that they viewed it as a big data problem,” Guttag says. “They thought, ‘We’re the experts. No one’s better at crunching large amounts of data than us.’ We thought if you want to make the right decision for individuals, the problem was a small data problem: Each patient is different, and we didn’t want to recommend to patients what was best on average. We wanted what was best for each individual.”

The company’s first models helped recommend skilled nursing facilities for post-acute care patients. Many such patients experience further health problems and return to the hospital. Health at Scale’s models showed that some facilities were better at helping specific kinds of people with specific health problems. For example, a 64-year-old man with a history of cardiovascular disease may fare better at one facility compared to another.

Today the company’s recommendations help guide patients to the primary care physicians, surgeons, and specialists that are best suited for them. Guttag even used the service when he got his hip replaced last year.

Health at Scale also helps organizations identify people at rising risk of specific adverse health events, like heart attacks, in the future.

“We’ve gone beyond the notion of identifying people who have frequently visited emergency departments or hospitals in the past, to get to the much more actionable problem of finding those people at an inflection point, where they are likely to experience worse outcomes and higher costs,” Syed says.

The company’s other solutions help determine the best treatment options for patients and help reduce health care fraud, waste, and abuse. Each use case is designed to improve patient health outcomes by giving health care organizations decision-support for action.

“Broadly speaking, we are interested in building models that can be used to help avoid problems, rather than simply predict them,” says Guttag. “For example, identifying those individuals at highest risk for serious complications of a respiratory infection [enables care providers] to target them for interventions that reduce their chance of developing such an infection.”

Impact at scale

Earlier this year, as the scope of the Covid-19 pandemic was becoming clear, Health at Scale began considering ways its models could help.

“The lack of data in the beginning of the pandemic motivated us to look at the experiences we have gained from combatting other respiratory infections like influenza and pneumonia,” says Saeed, who serves as Health at Scale’s chief medical officer.

The idea led to a peer-reviewed paper where researchers affiliated with the company, the University of Michigan, and MIT showed Health at Scale’s models could accurately predict hospitalizations and visits to the emergency department related to respiratory infections.

“We did the work on the paper using the tech we’d already built,” Guttag says. “We had interception products deployed for predicting patients at risk of emergent hospitalizations for a variety of causes, and we saw that we could extend that approach. We had customers that we gave the solution to for free.”

The paper proved out another use case for a technology that is already being used by some of the largest health plans in the U.S. That’s an impressive customer base for a five-year-old company of only 20 people — about half of whom have MIT affiliations.

“The culture MIT creates to solve problems that are worth solving, to go after impact, I think that’s been reflected in the way the company got together and has operated,” Syed says. “I’m deeply proud that we’ve maintained that MIT spirit.”

And, Syed believes, there’s much more to come.

“We set out with the goal of driving impact,” Syed says. “We currently run some of the largest production deployments of machine learning at scale, affecting millions, if not tens of millions, of patients, and we are only just getting started.”



from MIT News https://ift.tt/31LWlNC
via Gabe's Musing's

Sunday, August 30, 2020

Robot takes contact-free measurements of patients’ vital signs

The research described in this article has been published on a preprint server but has not yet been peer-reviewed by scientific or medical experts.

During the current coronavirus pandemic, one of the riskiest parts of a health care worker’s job is assessing people who have symptoms of Covid-19. Researchers from MIT and Brigham and Women’s Hospital hope to reduce that risk by using robots to remotely measure patients’ vital signs.

The robots, which are controlled by a handheld device, can also carry a tablet that allows doctors to ask patients about their symptoms without being in the same room.

“In robotics, one of our goals is to use automation and robotic technology to remove people from dangerous jobs,” says Henwei Huang, an MIT postdoc. “We thought it should be possible for us to use a robot to remove the health care worker from the risk of directly exposing themselves to the patient.”

Using four cameras mounted on a dog-like robot developed by Boston Dynamics, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate, and blood oxygen saturation in healthy patients, from a distance of 2 meters. They are now making plans to test it in patients with Covid-19 symptoms.

“We are thrilled to have forged this industry-academia partnership in which scientists with engineering and robotics expertise worked with clinical teams at the hospital to bring sophisticated technologies to the bedside,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

The researchers have posted a paper on their system on the preprint server TechRxiv, and have submitted it to a peer-reviewed journal. Huang is one of the lead authors of the study, along with Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital, and Claas Ehmke, a visiting scholar from ETH Zurich.

Measuring vital signs

When Covid-19 cases began surging in Boston in March, many hospitals, including Brigham and Women’s, set up triage tents outside their emergency departments to evaluate people with Covid-19 symptoms. One major component of this initial evaluation is measuring vital signs, including body temperature.

The MIT and BWH researchers came up with the idea to use robotics to enable contactless monitoring of vital signs, to allow health care workers to minimize their exposure to potentially infectious patients. They decided to use existing computer vision technologies that can measure temperature, breathing rate, pulse, and blood oxygen saturation, and worked to make them mobile.

To achieve that, they used a robot known as Spot, which can walk on four legs, similarly to a dog. Health care workers can maneuver the robot to wherever patients are sitting, using a handheld controller. The researchers mounted four different cameras onto the robot — an infrared camera plus three monochrome cameras that filter different wavelengths of light.

The researchers developed algorithms that allow them to use the infrared camera to measure both elevated skin temperature and breathing rate. For body temperature, the camera measures skin temperature on the face, and the algorithm correlates that temperature with core body temperature. The algorithm also takes into account the ambient temperature and the distance between the camera and the patient, so that measurements can be taken from different distances, under different weather conditions, and still be accurate.
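The article doesn't specify the exact correlation model; one simple way such a compensation could be set up is a linear fit of core temperature against skin temperature, ambient temperature, and camera distance. The coefficients and data below are entirely synthetic, chosen only to show the fitting mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: skin temp (deg C), ambient temp (deg C), distance (m).
n = 200
skin = rng.uniform(32, 36, n)
ambient = rng.uniform(5, 30, n)
dist = rng.uniform(1, 3, n)

# Hypothetical ground-truth relationship used to generate "measured" core temps.
core = 0.8 * skin + 0.05 * ambient + 0.1 * dist + 8.0 + rng.normal(0, 0.05, n)

# Least-squares fit of core temperature against the three predictors.
X = np.column_stack([skin, ambient, dist, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, core, rcond=None)

# Predict core temperature for a new reading: 34 C skin, 20 C ambient, 2 m away.
pred = np.array([34.0, 20.0, 2.0, 1.0]) @ coef
print(f"predicted core temperature: {pred:.1f} C")
```

Once fitted, the same model corrects readings taken at different distances and ambient conditions, which is the property the article describes.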

Measurements from the infrared camera can also be used to calculate the patient’s breathing rate. As the patient breathes in and out, wearing a mask, their breath changes the temperature of the mask. Measuring this temperature change allows the researchers to calculate how rapidly the patient is breathing.

The three monochrome cameras each filter a different wavelength of light — 670, 810, and 880 nanometers. These wavelengths allow the researchers to measure the slight color changes that result when hemoglobin in blood cells binds to oxygen and flows through blood vessels. The researchers’ algorithm uses these measurements to calculate both pulse rate and blood oxygen saturation.
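Pulse oximetry conventionally reduces such multi-wavelength signals to a "ratio of ratios." The sketch below uses synthetic signals and a textbook-style placeholder calibration (110 − 25R); the article does not specify the researchers' actual model, frame rate, or calibration:

```python
import numpy as np

fs = 30.0                         # camera frame rate, Hz (assumed)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)

def ppg(dc, ac, pulse_hz):
    """Synthetic photoplethysmographic intensity at one wavelength."""
    return dc + ac * np.sin(2 * np.pi * pulse_hz * t) + rng.normal(0, 0.0005, t.size)

pulse_hz = 1.2                    # 72 beats per minute
sig_670 = ppg(1.00, 0.010, pulse_hz)
sig_880 = ppg(1.00, 0.020, pulse_hz)

# Pulse rate: dominant frequency of the pulsatile (AC) component.
spectrum = np.abs(np.fft.rfft(sig_670 - sig_670.mean()))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
bpm = 60 * freqs[np.argmax(spectrum)]

# SpO2 via ratio-of-ratios; 110 - 25*R is a placeholder calibration curve.
def ac_over_dc(sig):
    return sig.std() / sig.mean()

R = ac_over_dc(sig_670) / ac_over_dc(sig_880)
spo2 = 110 - 25 * R
print(f"pulse: {bpm:.0f} bpm, SpO2 estimate: {spo2:.0f}%")
```

Real systems fit the calibration curve empirically against reference oximeters; the point here is only that relative AC/DC amplitudes across wavelengths carry the oxygenation information.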

“We didn’t really develop new technology to do the measurements,” Huang says. “What we did is integrate them together very specifically for the Covid application, to analyze different vital signs at the same time.”

Continuous monitoring

In this study, the researchers performed the measurements on healthy volunteers, and they are now making plans to test their robotic approach in people who are showing symptoms of Covid-19, in a hospital emergency department.

While in the near term, the researchers plan to focus on triage applications, in the longer term, they envision that the robots could be deployed in patients’ hospital rooms. This would allow the robots to continuously monitor patients and also allow doctors to check on them, via tablet, without having to enter the room. Both applications would require approval from the U.S. Food and Drug Administration.

The research was funded by the MIT Department of Mechanical Engineering and the Karl van Tassel (1925) Career Development Professorship.



from MIT News https://ift.tt/3ly3YPl
via Gabe's Musing's

Wednesday, August 26, 2020

National Science Foundation announces MIT-led Institute for Artificial Intelligence and Fundamental Interactions

The U.S. National Science Foundation (NSF) announced today an investment of more than $100 million to establish five artificial intelligence (AI) institutes, each receiving roughly $20 million over five years. One of these, the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), will be led by MIT’s Laboratory for Nuclear Science (LNS) and become the intellectual home of more than 25 physics and AI senior researchers at MIT and Harvard, Northeastern, and Tufts universities. 

By merging research in physics and AI, the IAIFI seeks to tackle some of the most challenging problems in physics, including precision calculations of the structure of matter, gravitational-wave detection of merging black holes, and the extraction of new physical laws from noisy data.

“The goal of the IAIFI is to develop the next generation of AI technologies, based on the transformative idea that artificial intelligence can directly incorporate physics intelligence,” says Jesse Thaler, an associate professor of physics at MIT, LNS researcher, and IAIFI director.  “By fusing the ‘deep learning’ revolution with the time-tested strategies of ‘deep thinking’ in physics, we aim to gain a deeper understanding of our universe and of the principles underlying intelligence.”

IAIFI researchers say their approach will enable groundbreaking physics discoveries, and advance AI more generally, through the development of novel AI approaches that incorporate first principles from fundamental physics.

“Invoking the simple principle of translational symmetry — which in nature gives rise to conservation of momentum — led to dramatic improvements in image recognition,” says Mike Williams, an associate professor of physics at MIT, LNS researcher, and IAIFI deputy director. “We believe incorporating more complex physics principles will revolutionize how AI is used to study fundamental interactions, while simultaneously advancing the foundations of AI.”
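To make Williams' example concrete: a convolution commutes with translation, which is the property convolutional networks bake in. A minimal numpy check (using circular convolution so boundaries are exact, rather than any IAIFI-specific code):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=32)           # a 1-D "image"
k = rng.normal(size=32)           # a convolution kernel

def circ_conv(signal, kernel):
    """Circular convolution via the FFT convolution theorem."""
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

shift = 5
# Equivariance: convolving a shifted input equals shifting the convolved output.
lhs = circ_conv(np.roll(x, shift), k)
rhs = np.roll(circ_conv(x, k), shift)
print(np.allclose(lhs, rhs))      # True: translation commutes with convolution
```

Because the convolution output transforms predictably under translation, a network built from convolutions does not have to relearn the same feature at every image position.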

In addition, a core element of the IAIFI mission is to transfer their technologies to the broader AI community.

“Recognizing the critical role of AI, NSF is investing in collaborative research and education hubs, such as the NSF IAIFI anchored at MIT, which will bring together academia, industry, and government to unearth profound discoveries and develop new capabilities,” says NSF Director Sethuraman Panchanathan. “Just as prior NSF investments enabled the breakthroughs that have given rise to today’s AI revolution, the awards being announced today will drive discovery and innovation that will sustain American leadership and competitiveness in AI for decades to come.”

Research in AI and fundamental interactions

Fundamental interactions are described by two pillars of modern physics: at short distances by the Standard Model of particle physics, and at long distances by the Lambda Cold Dark Matter model of Big Bang cosmology. Both models are based on physical first principles such as causality and space-time symmetries.  An abundance of experimental evidence supports these theories, but also exposes where they are incomplete, most pressingly that the Standard Model does not explain the nature of dark matter, which plays an essential role in cosmology.

AI has the potential to help answer these questions and others in physics.

For many physics problems, the governing equations that encode the fundamental physical laws are known. However, undertaking key calculations within these frameworks, as is essential to test our understanding of the universe and guide physics discovery, can be computationally demanding or even intractable. IAIFI researchers are developing AI for such first-principles theory studies, which naturally require AI approaches that rigorously encode physics knowledge. 

“My group is developing new provably exact algorithms for theoretical nuclear physics,” says Phiala Shanahan, an assistant professor of physics and LNS researcher at MIT. “Our first-principles approach turns out to have applications in other areas of science and even in robotics, leading to exciting collaborations with industry partners.”

Incorporating physics principles into AI could also have a major impact on many experimental applications, such as designing AI methods that are more easily verifiable. IAIFI researchers are working to enhance the scientific potential of various facilities, including the Large Hadron Collider (LHC) and the Laser Interferometer Gravitational-Wave Observatory (LIGO).

“Gravitational-wave detectors are among the most sensitive instruments on Earth, but the computational systems used to operate them are mostly based on technology from the previous century,” says Principal Research Scientist Lisa Barsotti of the MIT Kavli Institute for Astrophysics and Space Research. “We have only begun to scratch the surface of what can be done with AI; just enough to see that the IAIFI will be a game-changer.”

The unique features of these physics applications also offer compelling research opportunities in AI more broadly. For example, physics-informed architectures and hardware development could lead to advances in the speed of AI algorithms, and work in statistical physics is providing a theoretical foundation for understanding AI dynamics. 

“Physics has inspired many time-tested ideas in machine learning: maximizing entropy, Boltzmann machines, and variational inference, to name a few,” says Pulkit Agrawal, an assistant professor of electrical engineering and computer science at MIT, and researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We believe that close interaction between physics and AI researchers will be the catalyst that leads to the next generation of machine learning algorithms.” 

Cultivating early-career talent

AI technologies are advancing rapidly, making it both important and challenging to train junior researchers at the intersection of physics and AI. The IAIFI aims to recruit and train a talented and diverse group of early-career researchers, including at the postdoc level through its IAIFI Fellows Program.  

“By offering our fellows their choice of research problems, and the chance to focus on cutting-edge challenges in physics and AI, we will prepare many talented young scientists to become future leaders in both academia and industry,” says MIT professor of physics Marin Soljacic of the Research Laboratory of Electronics (RLE). 

IAIFI researchers hope these fellows will spark interdisciplinary and multi-investigator collaborations, generate new ideas and approaches, translate physics challenges beyond their native domains, and help develop a common language across disciplines. Applications for the inaugural IAIFI fellows are due in mid-October. 

Another related effort spearheaded by Thaler, Williams, and Alexander Rakhlin, an associate professor of brain and cognitive science at MIT and researcher in the Institute for Data, Systems, and Society (IDSS), is the development of a new interdisciplinary PhD program in physics, statistics, and data science, a collaborative effort between the Department of Physics and the Statistics and Data Science Center.

“Statistics and data science are among the foundational pillars of AI. Physics joining the interdisciplinary doctoral program will bring forth new ideas and areas of exploration, while fostering a new generation of leaders at the intersection of physics, statistics, and AI,” says Rakhlin.

Education, outreach, and partnerships 

The IAIFI aims to cultivate “human intelligence” by promoting education and outreach. For example, IAIFI members will contribute to establishing a MicroMasters degree program at MIT for students from non-traditional backgrounds.    

“We will increase the number of students in both physics and AI from underrepresented groups by providing fellowships for the MicroMasters program,” says Isaac Chuang, professor of physics and electrical engineering, senior associate dean for digital learning, and RLE researcher at MIT. “We also plan on working with undergraduate MIT Summer Research Program students, to introduce them to the tools of physics and AI research that they might not have access to at their home institutions.”

The IAIFI plans to expand its impact via numerous outreach efforts, including a K-12 program in which students are given data from the LHC and LIGO and tasked with rediscovering the Higgs boson and gravitational waves. 

“After confirming these recent Nobel Prizes, we can ask the students to find tiny artificial signals embedded in the data using AI and fundamental physics principles,” says assistant professor of physics Phil Harris, an LNS researcher at MIT. “With projects like this, we hope to disseminate knowledge about — and enthusiasm for — physics, AI, and their intersection.”

In addition, the IAIFI will collaborate with industry and government to advance the frontiers of both AI and physics, as well as societal sectors that stand to benefit from AI innovation. IAIFI members already have many active collaborations with industry partners, including DeepMind, Microsoft Research, and Amazon. 

“We will tackle two of the greatest mysteries of science: how our universe works and how intelligence works,” says MIT professor of physics Max Tegmark, an MIT Kavli Institute researcher. “Our key strategy is to link them, using physics to improve AI and AI to improve physics. We’re delighted that the NSF is investing the vital seed funding needed to launch this exciting effort.”

Building new connections at MIT and beyond

Leveraging MIT’s culture of collaboration, the IAIFI aims to generate new connections and to strengthen existing ones across MIT and beyond.

Of the 27 current IAIFI senior investigators, 16 are at MIT and members of the LNS, RLE, MIT Kavli Institute, CSAIL, and IDSS. In addition, IAIFI investigators are members of related NSF-supported efforts at MIT, such as the Center for Brains, Minds, and Machines within the McGovern Institute for Brain Research and the MIT-Harvard Center for Ultracold Atoms.  

“We expect a lot of creative synergies as we bring physics and computer science together to study AI,” says Bill Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and researcher in CSAIL. “I’m excited to work with my physics colleagues on topics that bridge these fields.”

More broadly, the IAIFI aims to make Cambridge, Massachusetts, and the surrounding Boston area a hub for collaborative efforts to advance both physics and AI. 

“As we teach in 8.01 and 8.02, part of what makes physics so powerful is that it provides a universal language that can be applied to a wide range of scientific problems,” says Thaler. “Through the IAIFI, we will create a common language that transcends the intellectual borders between physics and AI to facilitate groundbreaking discoveries.”



from MIT News https://ift.tt/3lmjwFK
via Gabe's Musing's