Thursday, November 12, 2020

Mary Frances Wagley, dedicated educator and the first woman to join the MIT Corporation, dies at 93

Mary Frances Wagley ’47, a trailblazer for women and a lifelong educator, died Nov. 1 at her home in Cockeysville, Maryland. She was 93.

Having attended MIT at a time when there were few female students — only 12 in her class — Wagley became the first woman to be an MIT Corporation member and the first woman to serve as president of the MIT Alumni Association.

“Mary Frances Wagley was a force for a better world and a pioneer for women in science and technology. She set an example with both her intellect and her leadership across an inspiring and impactful life. Everyone at MIT is fortunate to be benefitting from her path-breaking footsteps,” says MIT Corporation Chair Diane Greene, who is the first woman to serve in this role.

Wagley was born in New York City to Caroline and James Cash Penney, the founder of JC Penney. She grew up in White Plains on a small farm, often working outdoors with her father or riding her horse. Her skill as an equestrienne brought her as far as Madison Square Garden, where she competed at the National Horse Show.

When she applied to MIT, one of the deans tried in an interview to talk her out of attending, saying he was sure she wouldn’t like it, Wagley told “MIT Infinite History” in 2009.

“Well, I proved him wrong,” Wagley told the interviewer. “I was happy from the moment I stepped foot in the Institute. … I was just ready to soak up all I could learn, and from the day I walked in those doors at 77 Mass. Avenue, it just seemed to me this is the place I belong.”

Wagley represented the first generation in her family to attend college. Wagley and a friend, Emily “Paddy” Wade ’45, lived off campus and cooked for themselves because there were no dormitories or dining facilities for women. Nor did athletic facilities exist for women at the Institute. Starting out as a chemical engineering major, Wagley was not able to participate in the required chemical engineering summer camp because of her gender, and she was asked to change her major to chemistry, which she did. Despite the challenges associated with being one of very few women at MIT, however, she flourished, says her son, Jay Wagley SM ’89.

“My mom was a force. I think it was hard to be one of only 12 in her class, but she never shied away from a challenge,” he says. “She had a spectacular mind and enormous intellectual curiosity. I think having gone to MIT and having done well there gave my mom tremendous confidence.”

MIT also imbued Wagley with a sense of the importance of science and engineering in society. Speaking with a reporter for the “MIT News” section of MIT Technology Review, she recalled how on V-E Day, May 8, 1945, MIT President Karl Compton celebrated the Allied victory with students but then sent them back to class, telling them their skills were needed for the continued fighting in the Pacific and for reconstruction after the war.

“I guess this was the first time I felt important,” Wagley was quoted as saying.

After graduating from MIT, Wagley went directly to Oxford University, where she earned a doctoral degree in physical chemistry. When she returned to the United States, she had two employment offers: a research position at Princeton University and a teaching position at Smith College. She chose Smith and discovered that she loved teaching, delighting in finding ways to make concepts clear to students.

“I was earnest about trying to get what I knew across to the students in a way that they could grasp onto it,” Wagley said in the “Infinite History” interview.

Her interest in education also led her to teaching positions at Johns Hopkins University and Goucher College. Then, in 1966, she became head of St. Paul’s School for Girls, experimenting with a variety of new math courses and helping the school develop a reputation for strong math and science preparation.

In 1970, around the time that MIT started making some of the boys’ dorms co-ed, Wagley became the first female member of the MIT Corporation. Once referring to the group as “formidable,” she nonetheless served on visiting committees in fields ranging from chemistry and biology to philosophy, libraries, nuclear engineering, psychology, sponsored research, and the humanities. She also participated on the search committees that selected two of MIT’s presidents, Paul Gray and Charles Vest.

“I’ve tried to do a good job,” said Wagley, “thinking that that paved the way for women who came after me.”

It was MIT’s athletics, which had been completely unavailable to her as one of the first women at the Institute, that became a particular focus for Wagley as a Corporation member. She was instrumental in getting the Zesiger Sports and Fitness Center built, and she later established the Mary Frances Wagley Fund, an endowment supporting the head coach position for varsity men’s and women’s swimming and diving.

Wagley was working as the executive director of Episcopal Social Ministries of the Diocese of Maryland, where she ran a food bank and a homeless shelter, when she became the president of the MIT Alumni Association in 1984. Again, it was a first for women.

“As the first female president, obviously my topic was women at MIT,” Wagley told “Infinite History.” 

She became a life member of the MIT Corporation in 1988 and a life member emerita in 2002.

“My mom loved MIT, she loved her time there,” says Jay Wagley. “MIT was a good fit for her, and since she loved it there, she wanted to give back to the Institute.”

Wagley’s husband, physician Philip Franklin Wagley, died in 2000. She is survived by her three children — Anne Paxton Wagley of Berkeley, California; Mary Frances Kemper Wagley Copp of Providence, Rhode Island; and James “Jay” Franklin Penney Wagley of Dallas — as well as seven grandchildren and two great-grandchildren.

Donations in Wagley’s memory can be made to St. Paul’s School for Girls in Brooklandville, Maryland, or Immanuel Episcopal Church in Sparks Glencoe, Maryland. The Wagley family plans to hold a virtual service in the coming weeks. For information on the memorial service, please email mfpwservice@gmail.com.



from MIT News https://ift.tt/3ltboTI
via Gabe's Musing's

System brings deep learning to “internet of things” devices

Deep learning is everywhere. This branch of artificial intelligence curates your social media and serves your Google search results. Soon, deep learning could also check your vitals or set your thermostat. MIT researchers have developed a system that could bring deep learning neural networks to new — and much smaller — places, like the tiny computer chips in wearable medical devices, household appliances, and the 250 billion other objects that constitute the “internet of things” (IoT).

The system, called MCUNet, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security.

The research will be presented at next month’s Conference on Neural Information Processing Systems. The lead author is Ji Lin, a PhD student in Song Han’s lab in MIT’s Department of Electrical Engineering and Computer Science. Co-authors include Han and Yujun Lin of MIT, Wei-Ming Chen of MIT and National Taiwan University, and John Cohn and Chuang Gan of the MIT-IBM Watson AI Lab.

The Internet of Things

The IoT was born in the early 1980s. Grad students at Carnegie Mellon University, including Mike Kazar ’78, connected a Coca-Cola machine to the internet. The group’s motivation was simple: laziness. They wanted to use their computers to confirm the machine was stocked before trekking from their office to make a purchase. It was the world’s first internet-connected appliance. “This was pretty much treated as the punchline of a joke,” says Kazar, now a Microsoft engineer. “No one expected billions of devices on the internet.”

Since that Coke machine, everyday objects have become increasingly networked into the growing IoT. That includes everything from wearable heart monitors to smart fridges that tell you when you’re low on milk. IoT devices often run on microcontrollers — simple computer chips with no operating system, minimal processing power, and less than one thousandth of the memory of a typical smartphone. So pattern-recognition tasks like deep learning are difficult to run locally on IoT devices. For complex analysis, IoT-collected data is often sent to the cloud, making it vulnerable to hacking.

“How do we deploy neural nets directly on these tiny devices? It’s a new research area that’s getting very hot,” says Han. “Companies like Google and ARM are all working in this direction.” Han is too.

With MCUNet, Han’s group codesigned two components needed for “tiny deep learning” — the operation of neural networks on microcontrollers. One component is TinyEngine, an inference engine that directs resource management, akin to an operating system. TinyEngine is optimized to run a particular neural network structure, which is selected by MCUNet’s other component: TinyNAS, a neural architecture search algorithm.

System-algorithm codesign

Designing a deep network for microcontrollers isn’t easy. Existing neural architecture search techniques start with a big pool of possible network structures based on a predefined template, then they gradually find the one with high accuracy and low cost. While the method works, it’s not the most efficient. “It can work pretty well for GPUs or smartphones,” says Lin. “But it’s been difficult to directly apply these techniques to tiny microcontrollers, because they are too small.”

So Lin developed TinyNAS, a neural architecture search method that creates custom-sized networks. “We have a lot of microcontrollers that come with different power capacities and different memory sizes,” says Lin. “So we developed the algorithm [TinyNAS] to optimize the search space for different microcontrollers.” The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller — with no unnecessary parameters. “Then we deliver the final, efficient model to the microcontroller,” says Lin.
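
To make the shape of this idea concrete, here is a minimal sketch of hardware-constrained architecture search in the spirit of TinyNAS: randomly sample candidate network configurations, discard any whose estimated memory footprint exceeds a microcontroller’s budget, and keep the best of what remains. The budgets, cost model, and scoring function below are illustrative assumptions, not the published algorithm.

```python
# Sketch of constrained architecture search: sample configs, filter by an
# assumed memory budget, keep the highest-scoring survivor. Illustrative only.
import random

FLASH_BUDGET_KB = 1024   # assumed flash budget (roughly 1 MB, per the article)
SRAM_BUDGET_KB = 320     # assumed peak activation-memory budget

def estimate_cost(cfg):
    """Very rough proxies for model size and peak activation memory, in KB."""
    width, depth, resolution = cfg["width"], cfg["depth"], cfg["resolution"]
    params_kb = depth * (width ** 2) * 9 * 4 / 1024        # 3x3 convs, fp32 weights
    activations_kb = (resolution ** 2) * width * 4 / 1024  # largest feature map
    return params_kb, activations_kb

def proxy_accuracy(cfg):
    """Stand-in for an accuracy predictor (in practice, a trained supernet)."""
    return cfg["width"] * cfg["depth"] * cfg["resolution"] / 1e4

def search(num_samples=1000):
    best, best_score = None, -1.0
    for _ in range(num_samples):
        cfg = {
            "width": random.choice([8, 16, 24, 32, 48]),
            "depth": random.choice([6, 9, 12, 15]),
            "resolution": random.choice([48, 64, 96, 128]),
        }
        params_kb, act_kb = estimate_cost(cfg)
        if params_kb > FLASH_BUDGET_KB or act_kb > SRAM_BUDGET_KB:
            continue  # this candidate would not fit the target microcontroller
        score = proxy_accuracy(cfg)
        if score > best_score:
            best, best_score = cfg, score
    return best

if __name__ == "__main__":
    print(search())
```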

To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight — instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. “It doesn’t have off-chip memory, and it doesn’t have a disk,” says Han. “Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource.” Cue TinyEngine.

The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS’ customized neural network. Any deadweight code is discarded, which cuts down on compile time. “We keep only what we need,” says Han. “And since we designed the neural network, we know exactly what we need. That’s the advantage of system-algorithm codesign.” In the group’s tests of TinyEngine, the size of the compiled binary code was between 1.9 and five times smaller than that of comparable microcontroller inference engines from Google and ARM. TinyEngine also contains innovations that reduce runtime, including in-place depth-wise convolution, which cuts peak memory usage nearly in half. After codesigning TinyNAS and TinyEngine, Han’s team put MCUNet to the test.
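
The in-place trick mentioned above can be illustrated with a toy example. Because a depth-wise convolution processes each channel independently, the result for a channel can overwrite that channel’s slice of the input buffer as soon as it is computed, so only one channel-sized scratch buffer is needed rather than a full second activation tensor. The NumPy sketch below illustrates that memory pattern only; it is not TinyEngine’s optimized kernels.

```python
# Toy in-place depth-wise convolution: each output channel overwrites its own
# input channel, so peak extra memory is a single (H, W) scratch plane.
import numpy as np

def depthwise_conv_inplace(x, kernels):
    """x: (C, H, W) activation buffer, modified in place.
    kernels: (C, 3, 3), one 3x3 filter per channel (stride 1, zero padding)."""
    c, h, w = x.shape
    scratch = np.empty((h, w), dtype=x.dtype)      # one channel of scratch memory
    padded = np.zeros((h + 2, w + 2), dtype=x.dtype)
    for ch in range(c):
        padded[1:-1, 1:-1] = x[ch]
        for i in range(h):
            for j in range(w):
                scratch[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernels[ch])
        x[ch] = scratch   # overwrite the input channel with its own output
    return x

if __name__ == "__main__":
    act = np.random.rand(8, 16, 16).astype(np.float32)
    filt = np.random.rand(8, 3, 3).astype(np.float32)
    depthwise_conv_inplace(act, filt)
    print(act.shape)  # no full-size output tensor was ever allocated
```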

MCUNet’s first challenge was image classification. The researchers used the ImageNet database to train the system with labeled images, then to test its ability to classify novel ones. On a commercial microcontroller they tested, MCUNet successfully classified 70.7 percent of the novel images — the previous state-of-the-art neural network and inference engine combo was just 54 percent accurate. “Even a 1 percent improvement is considered significant,” says Lin. “So this is a giant leap for microcontroller settings.”

The team found similar results in ImageNet tests of three other microcontrollers. And on both speed and accuracy, MCUNet beat the competition for audio and visual “wake-word” tasks, where a user initiates an interaction with a computer using vocal cues (think: “Hey, Siri”) or simply by entering a room. The experiments highlight MCUNet’s adaptability to numerous applications.

“Huge potential”

The promising test results give Han hope that MCUNet will become the new industry standard for microcontrollers. “It has huge potential,” he says.

The advance “extends the frontier of deep neural network design even farther into the computational domain of small energy-efficient microcontrollers,” says Kurt Keutzer, a computer scientist at the University of California at Berkeley, who was not involved in the work. He adds that MCUNet could “bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors.”

MCUNet could also make IoT devices more secure. “A key advantage is preserving privacy,” says Han. “You don’t need to transmit the data to the cloud.”

Analyzing data locally reduces the risk of personal information being stolen — including personal health data. Han envisions smart watches with MCUNet that don’t just sense users’ heartbeat, blood pressure, and oxygen levels, but also analyze and help them understand that information. MCUNet could also bring deep learning to IoT devices in vehicles and rural areas with limited internet access.

Plus, MCUNet’s slim computing footprint translates into a slim carbon footprint. “Our big dream is for green AI,” says Han, adding that training a large neural network can burn carbon equivalent to the lifetime emissions of five cars. MCUNet on a microcontroller would require a small fraction of that energy. “Our end goal is to enable efficient, tiny AI with less computational resources, less human resources, and less data,” says Han.



from MIT News https://ift.tt/2IAjSKi
via Gabe's Musing's

Sunday, November 8, 2020

Aspiring physician explores the many levels of human health

It was her childhood peanut allergy that first sparked senior Ayesha Ng’s fascination with the human body. “To see this severe reaction happen to my body and not know what was happening — that made me a lot more curious about biology and living systems,” Ng says.

She didn’t exactly plan it this way. But in her three and a half years at MIT, Ng, a double major in biology and in brain and cognitive sciences from the Los Angeles, California, area, has conducted research and taken classes examining just about every level of human health — from cellular to societal.

Most recently, her passion for medicine and health equity led her to the National Foundation for the Centers for Disease Control and Prevention (CDC), where, this summer, she worked to develop guidelines for addressing health disparities on state and local health jurisdictions’ Covid-19 data dashboards. Now, as an aspiring physician amidst the medical school application process, Ng has a sense of how microbiological, physiological, and social systems interact to affect a person’s health.

Starting small

Throughout her first year at MIT, Ng studied the biology of health at a cellular level. Specifically, she researched the effects of fasting and aging on the regeneration of intestinal stem cells, which reside in the intestinal crypts and continuously divide and self-renew. Understanding these metabolic mechanisms is crucial, as their dysregulation can lead to age-associated diseases such as cancer.

“That experience allowed me to broaden my technical skills, just getting used to so many different types of molecular biological techniques right away, which I really appreciated,” Ng says of her time at the Whitehead Institute for Biomedical Research in Professor David Sabatini’s lab.

“After some time, I realized that I also wanted to study sciences at a broader, more macro level, instead of only the microbiology and molecular biology that we were studying in Course 7,” Ng says of her biology major.

In addition to studying the biology of cancer, Ng had developed a curiosity about the human brain and how it functions. “I was really interested in that, because my grandpa has dementia,” Ng says.

Seeing her grandfather’s cognitive decline, she was inspired to become involved in MIT BrainTrust, a student organization that offers a social support network for individuals from around the Boston, Massachusetts area who have brain injuries. “We have these meetings, in which I serve as one of only one or two students there to facilitate a safe space where we can have all these individuals with brain injury gather,” Ng says of the peer-support aspect of the program. “They can really share their mutual challenges and experiences.”

Investigating the brain

To pursue her interest in brain research and the societal impact of brain injuries, Ng traveled to the University of Hong Kong the summer after her first year as an MIT International Science and Technology Initiatives (MISTI) China Fung Scholar. Working with Professor Raymond Chang, she began to examine neurodegenerative disease and used tissue-clearing techniques to visualize 3D mouse brain structures at cellular resolution. “That was personally meaningful for me, to research about that and learn more about dementia,” Ng says.

Returning to MIT her sophomore year, Ng was certain that she wanted to continue studying the brain. She began working on Alzheimer’s research at the MIT Picower Institute for Learning and Memory in the lab of Professor Li-Huei Tsai, the Picower Professor of Neuroscience at MIT. Much existing research into Alzheimer’s disease has been at the bulk-tissue level, focusing on the neurons’ role in neurodegeneration associated with aging.

Ng’s work with Tsai considers the complexity of alterations across genes and less-abundant cell types, such as microglia, astrocytes, and other supporting glial cells that become dysregulated in the brains of patients with Alzheimer’s. Considering the interplay between and within cell types during neurodegeneration is most intriguing to her. While some molecular processes are protective, other damaging ones simultaneously occur and can exist even within the same cell type. This intricacy has made the mechanistic basis behind Alzheimer’s progression elusive and the research that much more crucial.

“It’s really interesting to see how heterogeneous and complex the responses are in Alzheimer's brains,” Ng says of the research program with Tsai, a founding director of MIT’s Aging Brain Initiative. “I really think about these potential new drug targets to improve treatment for Alzheimer's in the future because I have seen, with my grandpa especially, how treatment is really lacking in the neurodegeneration field. There’s no treatment that's been able to stop or even slow the progression of Alzheimer's disease.”

Her research project in the Tsai Lab relies on a technology called single-nucleus RNA sequencing (snRNA-seq), which captures the gene expression of individual cells by sequencing the RNA in their nuclei. This is followed by computational dimension reduction and clustering algorithms to examine how Alzheimer’s disease differentially affects genes and specific cell types.

“With that project, we've been able to use single-nucleus RNA sequencing to really look at the brains of human Alzheimer's patients,” Ng says. “And with the single-cell technology, we're able to look at brain tissue at a much higher resolution, allowing us to see that there’s so much heterogeneity within the brain.”
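
The computational steps Ng describes, dimension reduction followed by clustering of single-nucleus profiles, can be sketched generically as below. The simulated count matrix and parameter choices are placeholders; real analyses typically rely on dedicated single-cell toolkits such as Scanpy or Seurat, and this is not the Tsai lab’s actual pipeline.

```python
# Generic sketch: normalize a nuclei-by-genes count matrix, reduce its
# dimensionality, and cluster the nuclei into putative cell types.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(2000, 1000)).astype(float)  # nuclei x genes (simulated)

# Library-size normalization and log transform
counts = counts / counts.sum(axis=1, keepdims=True) * 1e4
log_counts = np.log1p(counts)

# Dimension reduction, then clustering in the reduced space
pcs = PCA(n_components=30, random_state=0).fit_transform(log_counts)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pcs)

print(np.bincount(labels))  # how many nuclei fall into each putative cell type
```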

After conducting more than a year of Alzheimer’s research and then taking a human physiology class in her third year, Ng decided to add a second major in brain and cognitive sciences to gain deeper insight specifically into how the nervous system within the body functions.

“That class really allowed me to realize that I really love organ systems and wanted to study by looking at more physiological mechanisms,” Ng says. “It has been really great to, at the end of my college career, really delve more into a very specific system.”

Medicine and society

Having gained perspective on cellular biology, microbiology, and human organ systems, Ng decided to zoom out further, interning this past summer at the National Foundation for the CDC. She found the opportunity through MIT’s PKG Center, applied as one of 60 candidates, and was selected for a team of four. There, as a member of the CDC Foundation’s Health Equity Strike Team, she examined how to increase the transparency of publicly available Covid-19 data on health disparities and how the narrative tied to health equity can be modified in public health messages. This involved harnessing data about the demographics of those most affected during Covid-19 — including how infection and mortality rates differ starkly based on social factors such as housing conditions, socioeconomic status, race, and ethnicity.

“Thinking about all these factors, we compiled a set of best practices for how to present data about Covid-19, what data should be collected, and tried to push those out to help jurisdictions as best-practice recommendations,” Ng says. “That did really increase my interest in health equity and made me realize how important public health is as well.”

Amidst the Covid-19 pandemic, Ng is spending the first semester of her senior year at home with her family in the Los Angeles area. “I really miss the people and not being able to interact with not only other students and peers, but also faculty as well,” she says. “I really wanted to enjoy time with friends, and just explore more of MIT, too, which I didn't always get the chance to do over the past few years.”

Still, she continues to participate in both BrainTrust and MIT’s Asian Dance team, remotely, through weekly practices on Zoom.

“I think dance is one of the biggest de-stressors for me; I had never done dance before going to college. Getting to meet this team and join this community allowed me not only to connect to my Asian cultural roots, but also just expose myself to this new art form where I could really learn how to express myself on stage,” Ng says. “And that really has been the source of relief for me to just liberate any worries that I have, and has increased my sense of self-awareness and self-confidence.”

Armed with the many experiences she has enjoyed at MIT, both in and out of the classroom, Ng plans to continue studying both medicine and public health. She’s excited to explore different potential specialties and is currently most intrigued by surgery. Whichever specialty she may choose, she is determined to include health equity and cultural sensitivity in her practice.

“Seeing surgeons, I personally think that being able to physically heal a patient with my own hands, that would be the most rewarding feeling,” Ng says. “I will strive to, as a physician, use whatever platform that I have to advocate for patients and really drive health-care systems to overcome disparities.”



from MIT News https://ift.tt/3eDCOn4
via Gabe's Musing's

Thursday, November 5, 2020

A storyteller dedicated to environmental justice

“What’s an important part of your identity?”

It was a simple question. Yet Mimi Wahid watched as the high school students in her workshop fell silent, their eyebrows furrowed in thought. It was clear that for many, this was the first time they had been directly asked this question.

To Wahid, an MIT senior, questions about identity define her story. Growing up in rural North Carolina as the daughter of a white mother and a black father, Wahid often found herself thinking about race.

She soon learned she wasn’t alone. As a high school student, she was invited to attend the NAIS Student Diversity Leadership Conference, a national event that brings together 1,700 students from different backgrounds across the country. Wahid credits the experience as her first exposure to the importance of identity development.

Today, Wahid is a faculty member for the conference, helping to build and facilitate the annual events, and she has led other identity workshops at MIT, including through MIT’s Office of Engineering Outreach Programs (OEOP). In this work, she aims to help students better understand how social environment, experiences, and self-definition work together to shape a person’s identity, and how exploring categories such as race, gender, and socioeconomic status can help people understand and discuss their own identities.

“I’m constantly viewing things through a lens that’s been shaped by my identity,” says Wahid, who is double-majoring in urban studies and planning and in writing. “My racial identity taught me a lot growing up about the contrast of power and privilege. It affects the way I see the world. It affects what I care about most.”

Wahid cites her identity as the source of her passion for environmental justice. Both her parents are landscapers, and from a young age, she often accompanied them on jobs around town. It was through these experiences that she developed a love for nature and an understanding of the different economic divides in her community.

“I would read in the newspaper about how only certain neighborhoods were affected by harmful water quality in my town. It was clear that it was happening more in poor communities,” says Wahid.

The fact that residents of these communities often were people of color didn’t escape Wahid’s notice. Her father had raised her on stories about his childhood that were entangled with historical social movements. Born in 1950 and raised in South Carolina, Wahid’s father was familiar with the impact of segregation and wanted his children to be educated about their surroundings.

“When you’re in North Carolina, the remnants of the Civil Rights movement are everywhere,” she says. “But the effects of segregation are still there today. People of color are disproportionately exposed to the worst parts of the environment.”

Wahid came to MIT knowing that environmental justice was the issue she wanted to tackle. In her first year, she enrolled in the graduate-level course 11.401 (Introduction to Community Housing and Economic Development). While she lacked the working experience of her older peers, Wahid still found ways to excel. “I could draw evidence from our readings and connect it to what I’d seen in my past. The class helped me value the importance of what I had grown up observing,” she says.

Wahid also credits the class for giving her a new perspective on ways to solve problems. Her original plan was to pursue a degree in environmental engineering, but over time, her interests in reading and writing grew. Inspired by students in her class who were urban planners and community organizers, Wahid realized that she could instead use her words to fight for environmental justice.

She continued to take advanced courses and eventually accumulated enough credits to pursue a double major in urban studies and writing. While the number of students majoring in MIT’s Comparative Media Studies/Writing program is relatively small, many students from other majors often join the classes and write about their unique scientific interests. For Wahid, this meant taking the opportunity to share her longstanding appreciation for trees, among other topics. Her early childhood knowledge of tree identification was brought back to light in a whole new way.

In a science writing class, she wrote a short story about the importance of urban forestry in cities most at risk from climate change. Her work earned her the DeWitt Wallace Prize for Science Writing for the Public.

“Science writing was something that helped me bridge what I was learning in school and explain it to my friends and family back home,” she says.

Wahid also channeled her talent for communicating environmental science through her work at the Center for Coalfield Justice. Based out of a rural coal-mining town in Pennsylvania, the nonprofit organization works with individuals who are most directly affected by resource extraction.

As an intern and PKG Fellow, Wahid helped write resource documents that informed community members about their rights in relation to coal companies and how changes in the fossil fuel industry would affect their futures. The information was eventually compiled into a public workshop.

“The town that I was working in was, in some ways, similar to my hometown,” says Wahid. Although she was hundreds of miles away, Wahid’s memories helped her to create questions for the workshop that she knew would be on the minds of community members. Her work opened important discussions and gave locals a chance to discuss new opportunities.

In addition to her work in Pennsylvania, Wahid has also participated in a variety of other efforts on campus focused on social justice. She currently serves as a program facilitator for OEOP’s outreach program MOSTEC, whose mission is to build a diverse and supportive community for high school students interested in STEM fields. During her time at MIT, she has also volunteered as a coordinator and counselor for the PKG Center’s first-year pre-orientation program focused on social justice, and has served in roles with the SPXCE intercultural center, the MIT Admissions Multicultural Recruitment Team, and the Black Students Union, among other projects.

As she continued to interlace her knowledge of science with stories from her own life, Wahid found a passion for writing memoirs, including one that received MIT’s Isabelle de Courtivron Prize for 2020. She is currently working on a writing thesis that is a reworked collection of her previous essays, many of which focus on the theme of intersectionality.

One piece expands on the concept of human migration through the journeys in her own family’s history. She is connecting stories from her parents and grandparents to the histories of the places they’ve called home.

“I’m doing research on what was happening at those moments to paint their fuller history. I’m really curious about all the ways my different family stories intersect with larger narratives,” Wahid explains.

Through telling these stories, Wahid believes that she can better serve as an advocate for environmental justice. She credits her science writing course for teaching her that the challenge of communicating science can be improved through framing it with personal experiences.

“If I can share my own story, I can relate with people through that,” she says. “We all have experiences that shaped us — that made us who we are. When we create these connections, then we can finally start to have really impactful conversations.”



from MIT News https://ift.tt/2TXYUHv
via Gabe's Musing's

Wednesday, November 4, 2020

Using machine learning to track the pandemic’s impact on mental health

Dealing with a global pandemic has taken a toll on the mental health of millions of people. A team of MIT and Harvard University researchers has shown that they can measure those effects by analyzing the language that people use to express their anxiety online.

Using machine learning to analyze the text of more than 800,000 Reddit posts, the researchers were able to identify changes in the tone and content of language that people used as the first wave of the Covid-19 pandemic progressed, from January to April of 2020. Their analysis revealed several key changes in conversations about mental health, including an overall increase in discussion about anxiety and suicide.

“We found that there were these natural clusters that emerged related to suicidality and loneliness, and the amount of posts in these clusters more than doubled during the pandemic as compared to the same months of the preceding year, which is a grave concern,” says Daniel Low, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT and the lead author of the study.

The analysis also revealed varying impacts on people who already suffer from different types of mental illness. The findings could help psychiatrists, or potentially moderators of the Reddit forums that were studied, to better identify and help people whose mental health is suffering, the researchers say.

“When the mental health needs of so many in our society are inadequately met, even at baseline, we wanted to bring attention to the ways that many people are suffering during this time, in order to amplify and inform the allocation of resources to support them,” says Laurie Rumker, a graduate student in the Bioinformatics and Integrative Genomics PhD Program at Harvard and one of the authors of the study.

Satrajit Ghosh, a principal research scientist at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears in the Journal of Medical Internet Research. Other authors of the paper include Tanya Talkar, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT; John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center; and Guillermo Cecchi, a principal research staff member at the IBM Thomas J. Watson Research Center.

A wave of anxiety

The new study grew out of the MIT class 6.897/HST.956 (Machine Learning for Healthcare), in MIT’s Department of Electrical Engineering and Computer Science. Low, Rumker, and Talkar, who were all taking the course last spring, had done some previous research on using machine learning to detect mental health disorders based on how people speak and what they say. After the Covid-19 pandemic began, they decided to focus their class project on analyzing Reddit forums devoted to different types of mental illness.

“When Covid hit, we were all curious whether it was affecting certain communities more than others,” Low says. “Reddit gives us the opportunity to look at all these subreddits that are specialized support groups. It’s a really unique opportunity to see how these different communities were affected differently as the wave was happening, in real-time.”

The researchers analyzed posts from 15 subreddit groups devoted to a variety of mental illnesses, including schizophrenia, depression, and bipolar disorder. They also included a handful of groups devoted to topics not specifically related to mental health, such as personal finance, fitness, and parenting.

Using several types of natural language processing algorithms, the researchers measured the frequency of words associated with topics such as anxiety, death, isolation, and substance abuse, and grouped posts together based on similarities in the language used. These approaches allowed the researchers to identify similarities between each group’s posts after the onset of the pandemic, as well as distinctive differences between groups.
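
As a rough illustration of this kind of analysis (not the study’s actual code), one can represent each post by its word usage, cluster posts by similarity, and count mentions of topic-related terms. The example posts and the word list below are invented for demonstration.

```python
# Represent posts with TF-IDF, group similar posts, and tally anxiety terms.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "I can't sleep, my anxiety has been so much worse since lockdown",
    "Lost my job last month and I'm worried about paying rent",
    "Feeling really lonely, haven't talked to anyone in weeks",
    "Started running again and it's helping me stay calm",
]
anxiety_terms = {"anxiety", "worried", "panic", "scared"}

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

# Group posts with similar language
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Frequency of anxiety-related words per post
for post, label in zip(posts, labels):
    hits = sum(word in anxiety_terms for word in post.lower().split())
    print(f"cluster={label} anxiety_terms={hits} text={post[:40]}...")
```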

The researchers found that while people in most of the support groups began posting about Covid-19 in March, the group devoted to health anxiety started much earlier, in January. However, as the pandemic progressed, the other mental health groups began to closely resemble the health anxiety group, in terms of the language that was most often used. At the same time, the group devoted to personal finance showed the most negative semantic change from January to April 2020, and significantly increased the use of words related to economic stress and negative sentiment.

They also discovered that the mental health groups affected the most negatively early in the pandemic were those related to ADHD and eating disorders. The researchers hypothesize that without their usual social support systems in place, due to lockdowns, people suffering from those disorders found it much more difficult to manage their conditions. In those groups, the researchers found posts about hyperfocusing on the news and relapsing back into anorexia-type behaviors since meals were not being monitored by others due to quarantine.

Using another algorithm, the researchers grouped posts into clusters such as loneliness or substance use, and then tracked how those groups changed as the pandemic progressed. Posts related to suicide more than doubled from pre-pandemic levels, and the groups that became significantly associated with the suicidality cluster during the pandemic were the support groups for borderline personality disorder and post-traumatic stress disorder.

The researchers also found the introduction of new topics specifically seeking mental health help or social interaction. “The topics within these subreddit support groups were shifting a bit, as people were trying to adapt to a new life and focus on how they can go about getting more help if needed,” Talkar says.

While the authors emphasize that they cannot implicate the pandemic as the sole cause of the observed linguistic changes, they note that there was much more significant change during the period from January to April in 2020 than in the same months in 2019 and 2018, indicating the changes cannot be explained by normal annual trends.

Mental health resources

This type of analysis could help mental health care providers identify segments of the population that are most vulnerable to declines in mental health caused by not only the Covid-19 pandemic but other mental health stressors such as controversial elections or natural disasters, the researchers say.

Additionally, if applied to Reddit or other social media posts in real-time, this analysis could be used to offer users additional resources, such as guidance to a different support group, information on how to find mental health treatment, or the number for a suicide hotline.

“Reddit is a very valuable source of support for a lot of people who are suffering from mental health challenges, many of whom may not have formal access to other kinds of mental health support, so there are implications of this work for ways that support within Reddit could be provided,” Rumker says.

The researchers now plan to apply this approach to study whether posts on Reddit and other social media sites can be used to detect mental health disorders. One current project involves screening posts in a social media site for veterans for suicide risk and post-traumatic stress disorder.

The research was funded by the National Institutes of Health and the McGovern Institute.



from MIT News https://ift.tt/3646gih
via Gabe's Musing's

Saturday, October 31, 2020

A deep look at how financial markets are designed

Financial markets are fast-moving, complex, and opaque. Even the U.S. stock market is fragmented into an array of competing exchanges and a set of proprietary “dark pools” run by financial firms. Meanwhile, high-frequency traders zoom around buying and selling stocks at speeds other investors cannot match.

Yet stocks represent a relatively transparent investment compared to many types of bonds, derivatives, and commodities. So when the financial sector melted down in 2007-08, it led to a wave of reforms as regulators sought to rationalize markets.

But every financial market, reformed or not, has its quirks, making them all ripe for scholars to scrutinize. That’s what Haoxiang Zhu does. The Gordon Y. Billard Professor of Management and Finance at the MIT Sloan School of Management is an expert on how market design and structure influence asset prices and investors. Over the last decade, his detailed theoretical and empirical studies have illuminated market behavior and gained an audience — scholars, traders, and policymakers — interested in how markets can be structured.

“When we need to reform markets, what should we do?” asks Zhu. “To the extent that something is not done perfectly, how can we refine it? These are very concrete problems and I want my research to shed light directly on them.”

One award-winning paper Zhu co-wrote in 2017 shows how transparent, reliable benchmark prices help investors efficiently identify acceptable costs and dealers in many large markets. For instance, in 2012, LIBOR, the interest-rate benchmark applied to hundreds of trillions of dollars in derivatives, was shown to have had price-manipulation problems. Zhu’s work emphasizes the value of having robust benchmarks (as post-2012 reforms have attempted to address) rather than scrapping them altogether.

Another recent Zhu paper, published this past September, looks at the way the Dodd-Frank banking legislation of 2010 has changed the trading of some credit default swaps in the U.S. — by using centralized mechanisms to connect investors and dealers, instead of the one-on-one “over-the-counter” market. The new design has been working well, the paper finds, but still has room to improve; investors still have no easy ways to trade among themselves without dealer intermediation. Additional market-design changes could address these issues.

Many of Zhu’s results are nuanced: One 2014 paper he wrote about the stock market suggests that privately run dark pools may unexpectedly help price discovery by siphoning off lower-information traders, while better-informed traders help determine prices on the bigger exchanges. And a 2017 study he co-authored about the optimal trading frequency of stocks finds that when it comes to setting new prices, smaller-cap companies should likely be traded less frequently than bigger firms. Such findings suggest subtle ways to think about structuring stock markets — and indeed Zhu maintains ongoing dialogues with policy experts.

“I think this sort of analysis does inform policymaking,” Zhu says. “It’s not easy to do evidence-based rulemaking. It’s costly to discover evidence, it takes time.”

Solving one problem at a time

Zhu did not fully develop his interest in finance and markets until after his college days. As an undergraduate at Oxford University, he studied mathematics and computer science, graduating in 2006. Then Zhu got a job for a year at Lehman Brothers, the once-flourishing investment bank. He departed in 2007, a year before Lehman imploded; it had become overleveraged, borrowing massively to fund an array of bad bets.

“Fortunately, I left early,” says Zhu. Still, his short time working in finance revealed a couple of important things to him. Zhu found the daily routine of finance to be “very repetitive.” But he also became convinced there were compelling problems to be addressed in the area of market structures.

“I think part of my interest in the details of market design has to do with my industry experience,” Zhu says. “I came into finance and economics viewing it somewhat from the outside. I looked at it more as an engineer would. That’s why I think MIT’s a perfect fit, because of the engineering way of looking at things. We solve one problem at a time.”

Which is also to say that Zhu’s research is not necessarily intended to produce overarching conclusions about the nature of all markets; he investigates the mechanics of separate markets first and foremost.

“It’s hard to get very deep if you start too broad,” says Zhu, who earned tenure at MIT last year. “I would argue we should start with depth. Once you get to the bottom of something, you see there are connections between many different issues.”

Zhu received his PhD in 2012 from Stanford University’s Graduate School of Business, and joined the MIT faculty that same year. Along with his appointment in Sloan, Zhu is a faculty affiliate in the MIT Laboratory for Financial Engineering and the MIT Golub Center for Finance and Policy.

Among the honors Zhu has received, his research papers have won several awards. The paper on benchmarks, for one, was granted the Amundi Smith Breeden First Prize by the Journal of Finance; the paper on optimal trading frequency won the Kepos Capital Award for Best Paper on Investments, from the Western Finance Association; and Zhu’s dark pools paper won the Morgan Stanley Prize for Excellence in Financial Markets.

Like a start-up

Much of Zhu’s time and energy is also devoted to teaching, and he is quick to praise the students he works with at MIT Sloan.

“They are smart, they are hard-working,” Zhu says. Of his PhD students, he adds, “It is always a challenge to go from being a good student getting good grades to producing research. Producing research is almost like starting up a company. It’s not easy. We do our best to help them, and I enjoy interacting with them.”

And while continuing to study financial market design, Zhu is expanding his research portfolio. Among other projects, he is currently looking at the impact of new payment systems on the traditional banking industry.

“I think that’s really a fantastic area for research,” Zhu says. “Once you have a [new] payment system, people’s payments get diverted away from the banks. … So we basically look at how financial technology, in this case payment providers, siphons off customers and information away from banks, and how banks will cope.”

At the same time, Zhu’s work on market structures continues to have an audience in the finance industry and among its regulators, both of which he welcomes. Indeed, Zhu has written several comment letters to regulators about proposed rules that could have material impact on the market. For example, he has argued against certain proposals that would reduce the transparency of the corporate bond market, the swaps market, and investment managers’ portfolio holdings. But he is in favor of the U.S. Treasury’s innovation in issuing debt linked to the new U.S. benchmark interest rate that is set to replace LIBOR.

“In market design the message is often nuanced: There are advantages, there are disadvantages,” Zhu says. “But figuring out the tradeoff is what I find very rewarding, in doing this kind of work.”



from MIT News https://ift.tt/3mEnGsv
via Gabe's Musing's

Wednesday, October 28, 2020

Artificial intelligence model detects asymptomatic Covid-19 infections through cellphone-recorded coughs

Asymptomatic people who are infected with Covid-19 exhibit, by definition, no discernible physical symptoms of the disease. They are thus less likely to seek out testing for the virus, and could unknowingly spread the infection to others.

But it seems those who are asymptomatic may not be entirely free of changes wrought by the virus. MIT researchers have now found that people who are asymptomatic may differ from healthy individuals in the way that they cough. These differences are not decipherable to the human ear. But it turns out that they can be picked up by artificial intelligence.

In a paper published recently in the IEEE Journal of Engineering in Medicine and Biology, the team reports on an AI model that distinguishes asymptomatic people from healthy individuals through forced-cough recordings, which people voluntarily submitted through web browsers and devices such as cellphones and laptops.

The researchers trained the model on tens of thousands of samples of coughs, as well as spoken words. When they fed the model new cough recordings, it accurately identified 98.5 percent of coughs from people who were confirmed to have Covid-19, including 100 percent of coughs from asymptomatics — who reported they did not have symptoms but had tested positive for the virus.

The team is working on incorporating the model into a user-friendly app, which if FDA-approved and adopted on a large scale could potentially be a free, convenient, noninvasive prescreening tool to identify people who are likely to be asymptomatic for Covid-19. A user could log in daily, cough into their phone, and instantly get information on whether they might be infected and therefore should confirm with a formal test.

“The effective implementation of this group diagnostic tool could diminish the spread of the pandemic if everyone uses it before going to a classroom, a factory, or a restaurant,” says co-author Brian Subirana, a research scientist in MIT’s Auto-ID Laboratory.

Subirana’s co-authors are Jordi Laguarta and Ferran Hueto, of MIT’s Auto-ID Laboratory.

Vocal sentiments

Prior to the pandemic’s onset, research groups already had been training algorithms on cellphone recordings of coughs to accurately diagnose conditions such as pneumonia and asthma. In similar fashion, the MIT team was developing AI models to analyze forced-cough recordings to see if they could detect signs of Alzheimer’s, a disease associated with not only memory decline but also neuromuscular degradation such as weakened vocal cords.

They first trained a general machine-learning algorithm, or neural network, known as ResNet50, to discriminate sounds associated with different degrees of vocal cord strength. Studies have shown that the quality of the sound “mmmm” can be an indication of how weak or strong a person’s vocal cords are. Subirana trained the neural network on an audiobook dataset with more than 1,000 hours of speech, to pick out the word “them” from other words like “the” and “then.”
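
A hedged sketch of this general recipe (turning audio into a spectrogram "image" and feeding it to a ResNet50-style classifier) follows. It is not the authors’ exact architecture or training setup; the sample rate, preprocessing, and two-class output head are illustrative assumptions.

```python
# Audio-to-spectrogram front end plus a ResNet50 backbone with a binary head.
import torch
import torch.nn as nn
import torchaudio
from torchvision import models

# Mel-spectrogram front end (16 kHz mono audio assumed)
mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128)

# ResNet50 backbone; in practice one would load pretrained weights
# (the exact argument depends on the torchvision version)
backbone = models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # two-class output (assumed)

def classify(waveform: torch.Tensor) -> torch.Tensor:
    """waveform: (1, num_samples) mono audio tensor."""
    spec = mel(waveform)                          # (1, n_mels, time)
    spec = torch.log1p(spec)                      # compress dynamic range
    spec = spec.unsqueeze(0).repeat(1, 3, 1, 1)   # replicate to 3 "RGB" channels
    return backbone(spec)                         # (1, 2) class logits

if __name__ == "__main__":
    fake_clip = torch.randn(1, 16000 * 3)         # 3 seconds of random audio
    print(classify(fake_clip).shape)
```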

The team trained a second neural network to distinguish emotional states evident in speech, because Alzheimer’s patients — and people with neurological decline more generally — have been shown to display certain sentiments such as frustration, or having a flat affect, more frequently than they express happiness or calm. The researchers developed a sentiment speech classifier model by training it on a large dataset of actors intonating emotional states, such as neutral, calm, happy, and sad.

The researchers then trained a third neural network on a database of coughs in order to discern changes in lung and respiratory performance.

Finally, the team combined all three models, and overlaid an algorithm to detect muscular degradation. The algorithm does so by essentially simulating an audio mask, or layer of noise, and distinguishing strong coughs — those that can be heard over the noise — from weaker ones.
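
One simple way to read that masking step (an illustrative interpretation, not the authors’ implementation) is to overlay synthetic noise on a recording and test whether the cough still stands out above it; the noise level and threshold below are arbitrary placeholders.

```python
# Overlay a noise "mask" and check whether the cough clears it by a margin.
import numpy as np

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

def is_strong_cough(cough: np.ndarray, noise_rms: float = 0.05,
                    margin_db: float = 6.0) -> bool:
    """Return True if the cough's energy exceeds the noise mask by margin_db."""
    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, noise_rms, size=cough.shape)
    snr_db = 20 * np.log10(rms(cough) / rms(noise))
    return snr_db > margin_db

if __name__ == "__main__":
    strong = 0.3 * np.sin(np.linspace(0, 200, 16000))   # loud synthetic "cough"
    weak = 0.02 * np.sin(np.linspace(0, 200, 16000))    # faint one
    print(is_strong_cough(strong), is_strong_cough(weak))  # True False
```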

With their new AI framework, the team fed in audio recordings, including of Alzheimer’s patients, and found it could identify the Alzheimer’s samples better than existing models. The results showed that, together, vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation were effective biomarkers for diagnosing the disease.

When the coronavirus pandemic began to unfold, Subirana wondered whether their AI framework for Alzheimer’s might also work for diagnosing Covid-19, as there was growing evidence that infected patients experienced some similar neurological symptoms such as temporary neuromuscular impairment.

“The sounds of talking and coughing are both influenced by the vocal cords and surrounding organs. This means that when you talk, part of your talking is like coughing, and vice versa. It also means that things we easily derive from fluent speech, AI can pick up simply from coughs, including things like the person’s gender, mother tongue, or even emotional state. There’s in fact sentiment embedded in how you cough,” Subirana says. “So we thought, why don’t we try these Alzheimer’s biomarkers [to see if they’re relevant] for Covid.”

“A striking similarity”

In April, the team set out to collect as many recordings of coughs as they could, including those from Covid-19 patients. They established a website where people can record a series of coughs, through a cellphone or other web-enabled device. Participants also fill out a survey of symptoms they are experiencing, whether or not they have Covid-19, and whether they were diagnosed through an official test, by a doctor’s assessment of their symptoms, or if they self-diagnosed. They also can note their gender, geographical location, and native language.

To date, the researchers have collected more than 70,000 recordings, each containing several coughs, amounting to some 200,000 forced-cough audio samples, which Subirana says is “the largest research cough dataset that we know of.” Around 2,500 recordings were submitted by people who were confirmed to have Covid-19, including those who were asymptomatic.

The team used the 2,500 Covid-associated recordings, along with 2,500 more recordings that they randomly selected from the collection to balance the dataset. They used 4,000 of these samples to train the AI model. The remaining 1,000 recordings were then fed into the model to see if it could accurately discern coughs from Covid patients versus healthy individuals.

Surprisingly, as the researchers write in their paper, their efforts have revealed “a striking similarity between Alzheimer’s and Covid discrimination.”

Without much tweaking within the AI framework originally meant for Alzheimer’s, they found it was able to pick up patterns in the four biomarkers — vocal cord strength, sentiment, lung and respiratory performance, and muscular degradation — that are specific to Covid-19. The model identified 98.5 percent of coughs from people confirmed with Covid-19, and of those, it accurately detected all of the asymptomatic coughs.

“We think this shows that the way you produce sound changes when you have Covid, even if you’re asymptomatic,” Subirana says.

Asymptomatic symptoms

The AI model, Subirana stresses, is not meant to diagnose symptomatic people, as far as determining whether their symptoms are due to Covid-19 or other conditions like flu or asthma. The tool’s strength lies in its ability to discern asymptomatic coughs from healthy coughs.

The team is working with a company to develop a free pre-screening app based on their AI model. They are also partnering with several hospitals around the world to collect a larger, more diverse set of cough recordings, which will help to train and strengthen the model’s accuracy.

As they propose in their paper, “Pandemics could be a thing of the past if pre-screening tools are always on in the background and constantly improved.”

Ultimately, they envision that audio AI models like the one they’ve developed may be incorporated into smart speakers and other listening devices so that people can conveniently get an initial assessment of their disease risk, perhaps on a daily basis.

This research was supported, in part, by Takeda Pharmaceutical Company Limited.



from MIT News https://ift.tt/3mu306o
via Gabe's Musing's

Monday, October 26, 2020

Tyler Jacks, founding director of MIT’s Koch Institute, to step down

The Koch Institute for Integrative Cancer Research at MIT, a National Cancer Institute (NCI)-designated cancer center, has announced that Tyler Jacks will step down from his role as director, pending selection of his successor.

“An exceptionally creative scientist and a leader of great vision, Tyler also has a rare gift for launching and managing large, complex organizations, attracting exceptional talent and inspiring philanthropic support,” says MIT President L. Rafael Reif. “We are profoundly grateful for all the ways he has served MIT, including most recently his leadership on the Research Ramp Up Lightning Committee, which made it possible for MIT's research enterprise to resume in safe ways after the initial Covid shutdown. I offer warmest admiration and best wishes as Tyler steps down from leading the Koch and returns full time to the excitement of the lab.”

Jacks, the David H. Koch Professor of Biology, has served as director for more than 19 years, first for the MIT Center for Cancer Research (CCR) and then for its successor, the Koch Institute. The CCR was founded by Nobel laureate Salvador Luria in 1974, shortly after the federal government declared “war on cancer,” with the mission of unravelling the molecular core of cancer. Jacks became the center’s fourth director in 2001, following Luria, Nobel laureate and Institute Professor Phillip Sharp, and Daniel K. Ludwig Professor for Cancer Research Richard Hynes.

Aided by the championing of then-MIT President Susan Hockfield and a gift of $100 million from MIT alumnus David H. Koch ’62, SM ’63, Jacks oversaw the evolution of the Center for Cancer Research into the Koch Institute in 2007 as well as the construction of a new home in Building 76, completed in 2010. The Koch Institute expands the mission of its predecessor by bringing life scientists and engineers together to advance understanding of the basic biology of cancer, and to develop new tools to better diagnose, monitor, and treat the disease.

Under the direction of Jacks, the institute has become an engine of collaborative cancer research at MIT. “Tyler’s vision and execution of a convergent cancer research program has propelled the Koch Institute to the forefront of discovery,” notes Maria Zuber, MIT’s vice president for research.

Bolstered by the Koch Institute’s associate directors Jacqueline Lees, Matthew Vander Heiden, Darrell Irvine, and Dane Wittrup, Jacks oversaw four successful renewals of the coveted NCI-designated cancer center status, with the last two renewals garnering perfect scores. In 2015, Jacks was the recipient of the James R. Killian Jr. Faculty Achievement Award, the highest honor the MIT faculty can bestow upon one of its members, for his leadership in cancer research and for his role in establishing the Koch Institute.

“Tyler Jacks turned the compelling idea to accelerate progress against cancer by bringing together fundamental biology, engineering know-how, and clinical expertise, into the intensively collaborative environment that is now the Koch Institute for Integrative Cancer Research,” says Hockfield. “His extraordinary leadership has amplified the original idea into a paradigm-changing approach to cancer, which now serves as a model for research centers around the world.”

To support cross-disciplinary research in high-impact areas and expedite translation from the bench to the clinic, Jacks and his colleagues shepherded the creation of numerous centers and programs, among them the Ludwig Center for Molecular Oncology, the Marble Center for Cancer Nanomedicine, the MIT Center for Precision Cancer Medicine, the Swanson Biotechnology Center, the Lustgarten Lab for Pancreatic Cancer Research, and the MIT Stem Cell Initiative. In addition, Jacks has co-led the Bridge Project, a collaboration between the Koch Institute and Dana-Farber/Harvard Cancer Center that brings bioengineers, cancer scientists, and clinical oncologists together to solve some of the most challenging problems in cancer research. Jacks has raised nearly $375 million in support of these efforts, as well as the building of the Koch Institute facility, the Koch Institute Frontier Research Program, and other activities.

Jacks first became interested in cancer as a Harvard University undergraduate while attending a lecture by Robert Weinberg, the Daniel K. Ludwig Professor of Cancer Research and member of the Whitehead Institute, who is himself a pioneer in cancer genetics. After earning his PhD at the University of California at San Francisco under the direction of Nobel laureate Harold Varmus, Jacks joined Weinberg’s lab as a postdoctoral fellow. He joined the MIT faculty in 1992 with appointments in the Center for Cancer Research and the Department of Biology.

Jacks is widely considered a leader in the development of engineered mouse models of human cancers, and has pioneered the use of gene-targeting technology to construct mouse models and to study cancer-associated genes in mice. Strains of mice developed in his lab are used by researchers around the world, as well as by neighboring labs within the Koch Institute. Because these models closely resemble human forms of the disease, they have allowed researchers to track how tumors progress and to test new ways to detect and treat cancer. In more recent research, Jacks has been using mouse models to investigate how immune and tumor cells interact during cancer development and how tumors successfully evade immune recognition. This research is expected to lead to new immune-based therapies for human cancer.

Outside his research and MIT leadership, Jacks co-chaired the Blue Ribbon Panel for the National Cancer Moonshot Initiative, chaired the National Cancer Advisory Board of the National Cancer Institute, and is a past president of the American Association for Cancer Research. He is an elected member of the National Academy of Sciences, the National Academy of Medicine, and the American Academy of Arts and Sciences. Jacks serves on the Board of Directors of Amgen and Thermo Fisher Scientific. He is also a co-founder of T2 Biosystems and Dragonfly Therapeutics, serves as an advisor to several other companies, and is a member of the Harvard Board of Overseers.

Sharp will lead the search for the next director of the Koch Institute, with guidance from noted leaders in MIT’s cancer research community, including Hockfield and Hynes, as well as Angela M. Belcher, head of the Department of Biological Engineering and James Mason Crafts Professor; Paula T. Hammond, head of the Department of Chemical Engineering and David H. Koch Professor of Engineering; Amy Keating, professor of biology; Robert S. Langer, David H. Koch Institute Professor; and David M. Sabatini, professor of biology and member of the Whitehead Institute for Biomedical Research.

“Jacks is a renowned scientist whose personal research has changed the prevention and treatment of cancer,” says Sharp. “His contributions to the creation of the Koch Institute for Integrative Cancer Research and his leadership as its inaugural director have also transformed cancer research at MIT and nationally. By integrating engineers and cancer biologists into a community that shares knowledge and skills, and collaborates with clinical scientists and the private sector, this convergent institute represents the future of biological research in the MIT style.”

After Jacks steps down, he will continue his research in the areas of cancer genetics and immuno-oncology and his teaching, while also stewarding the Bridge Project into its second decade.

“It has been a privilege for me to serve as director of the MIT Center for Cancer Research and the Koch Institute for the past two decades and to work alongside many of the brightest minds in cancer research,” says Jacks. “The Koch Institute is a powerhouse of research and innovation, and I look forward to the next generation of leadership in this very special place.”



from MIT News https://ift.tt/3jwr9Hs
via Gabe's Musing's

Sunday, October 25, 2020

Silencing gene expression to cure complex diseases

Many people think of new medicines as bullets, and in the pharmaceutical industry, frequently used terms like “targets” and “hits” reinforce that idea. Immuneering co-founder and CEO Ben Zeskind ’03, PhD ’06 prefers a different analogy.

His company, which specializes in bioinformatics and computational biology, sees many effective drugs as working more like noise-canceling headphones.

Rather than focusing on the DNA and proteins involved in a disease, Immuneering focuses on disease-associated gene signaling and expression data. The company is trying to cancel out those signals like a pair of headphones blocks out unwanted background noise.

The approach is guided by Immuneering’s decade-plus of experience helping large pharmaceutical companies understand the biological mechanisms behind some of their most successful medicines.

“We started noticing some common patterns in terms of how these very successful drugs were working, and eventually we realized we could use these insights to create a platform that would let us identify new medicine,” Zeskind says. “[The idea is] to not just make existing medicines work better but also to create entirely new medicines that work better than anything that has come before.”

In keeping with that idea, Immuneering is currently developing a bold pipeline of drugs aimed at some of the most deadly forms of cancer, in addition to other complex diseases that have proven difficult to treat, like Alzheimer’s. The company’s lead drug candidate, which targets a protein signaling pathway associated with many human cancers, will begin clinical trials within the year.

It’s the first of what Immuneering hopes will be a number of clinical trials enabled by what the company calls its “disease-canceling technology,” which analyzes the gene expression data of diseases and uses computational models to identify small-molecule compounds likely to bind to disease pathways and silence them.
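
The “noise-canceling” idea can be illustrated with a toy computation: represent a disease as a gene-expression signature (genes up or down relative to healthy tissue), represent each candidate compound by the expression changes it induces, and rank compounds by how strongly their profile anti-correlates with, and thus “cancels,” the disease signature. The sketch below uses made-up gene names and numbers and is only a schematic of the concept, not Immuneering’s proprietary platform.

```python
# Toy illustration of "disease canceling": rank compounds whose expression effects
# most strongly oppose a disease's gene-expression signature. All data are made up.
import numpy as np

genes = ["GENE_A", "GENE_B", "GENE_C", "GENE_D"]

# Disease signature: positive = over-expressed in disease, negative = under-expressed.
disease = np.array([2.0, -1.5, 1.0, -0.5])

# Expression changes induced by hypothetical compounds (same gene order).
compounds = {
    "compound_1": np.array([-1.8, 1.2, -0.9, 0.4]),  # roughly reverses the signature
    "compound_2": np.array([1.5, -1.0, 0.8, -0.3]),  # mimics the disease instead
    "compound_3": np.array([0.1, 0.0, -0.2, 0.1]),   # mostly inert
}

def cancellation_score(disease_sig, compound_sig):
    """More negative correlation = stronger 'cancellation' of the disease signal."""
    return -np.corrcoef(disease_sig, compound_sig)[0, 1]

ranked = sorted(compounds,
                key=lambda name: cancellation_score(disease, compounds[name]),
                reverse=True)
for name in ranked:
    print(name, round(cancellation_score(disease, compounds[name]), 3))
```

In this toy ranking, compound_1 scores highest because its induced changes run opposite to the disease signature, which is the intuition behind canceling a disease signal rather than hitting a single target.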

“Our most advanced candidates go after the RAS-RAF-MEK [protein] pathway,” Zeskind explains. “This is a pathway that’s activated in about half of all human cancers. This pathway is incredibly important in a number of the most serious cancers: pancreatic, colorectal, melanoma, lung cancer — a lot of the cancers that have proven tougher to go after. We believe this is one of the largest unsolved problems in human cancer.”

A good foundation

As an undergraduate, Zeskind participated in the MIT $100K Entrepreneurship Competition (the $50K back then) and helped organize some of the MIT Enterprise Forum’s events around entrepreneurship.

“MIT has a unique culture around entrepreneurship,” Zeskind says. “There aren’t many organizations that encourage it and celebrate it the way MIT does. Also, the philosophy of the biological engineering department, of taking problems in biology and analyzing them quantitatively and systematically using principles of engineering, that philosophy really drives our company today.”

Although his PhD didn’t focus on bioinformatics, Zeskind’s coursework did involve some computational analysis and offered a primer on oncology. One course in particular, taught by Doug Lauffenburger, the Ford Professor of Biological Engineering, Chemical Engineering, and Biology, resonated with him. The class tasked students with uncovering some of the mechanisms of the interleukin-2 (IL-2) protein, a molecule found in the immune system that’s known to severely limit tumor growth in a small percentage of people with certain cancers.

After Zeskind earned his MBA at Harvard Business School in 2008, he returned to MIT’s campus to talk to Lauffenburger about his idea for a company that would decipher the reasons for IL-2’s success in certain patients. Lauffenburger would go on to join Immuneering’s advisory board.

Of course, due to the financial crisis of 2007-08, that proved to be difficult timing for launching a startup. Without easy access to capital, Zeskind approached pharmaceutical companies to show them some of the insights his team had gained on IL-2. The companies weren’t interested in IL-2, but they were intrigued by Immuneering’s process for uncovering the way it worked.

“At first we thought, ‘We just spent a year figuring out IL-2 and now we have to start from scratch,’” Zeskind recalls. “But then we realized it would be easier the second time around, and that was a real turning point because we realized the company wasn’t about that specific medicine, it was about using data to figure out mechanism.”

In one of the company’s first projects, Immuneering uncovered some of the mechanisms behind an early cancer immunotherapy developed by Bristol-Myers Squibb. In another, they studied the workings of Teva Pharmaceuticals’ drug for multiple sclerosis.

As Immuneering continued working on successful drugs, the team began to notice some counterintuitive patterns.

“A lot of the conventional wisdom is to focus on DNA,” Zeskind says. “But what we saw over and over across many different projects was that transcriptomics, or which genes are turned on when — something you measure through RNA levels — was the thing that was most frequently informative about how a drug was working. That ran counter to conventional wisdom.”

In 2018, as Immuneering continued helping companies apply that insight to drugs that were already working, it decided to start developing medicines designed from the start to go after disease signals.

Today the company has drug pipelines focused on oncology, immuno-oncology, and neuroscience. Zeskind says its disease-canceling technology allows Immuneering to launch new drug programs about twice as fast and with about half the capital of other drug development programs.

“As long as we have a good gene-expression signature from human patient data for a particular disease, we’ll find targets and biological insights that let us go after them in new ways,” he says. “It’s a systematic, quantitative, efficient way to get those biological insights compared to a more traditional process, which involves a lot of trial and error.”

An inspired path

Even as Immuneering advances its drug pipelines, its bioinformatics services business continues to grow. Zeskind attributes that success to the company’s employees, about half of whom are MIT alumni — a continuation of a trend that began in the early days of the company, when Immuneering was mostly made up of recent MIT PhD graduates and postdocs.

“We were sort of the Navy SEALs of bioinformatics, if you will,” Zeskind says. “We’d come in with a small but incredibly well-trained team that knew how to make the most of the data they had available.”

In fact, it’s not lost on Zeskind that his analogy of drugs as noise-canceling headphones has a distinctively MIT spin: He was inspired by longtime MIT professor and Bose Corporation founder Amar Bose.

And Zeskind’s attraction to MIT came long before he ever set foot on campus. Growing up, his father, Dale Zeskind ’76, SM ’76, encouraged Ben and his sister Julie ’01, SM ’02 to attend MIT.

Unfortunately, Dale passed away recently after a battle with cancer. But his influence, which included helping to spark a passion for entrepreneurship in his son, is still being felt. Other members of Immuneering’s small team have also lost parents to cancer, adding a personal dimension to the work they do every day.

“Especially in the early days, people were taking more risk [joining us over] a large pharma company, but they were having a bigger impact,” Zeskind says. “It’s all about the work: looking at these successful drugs and figuring out why they’re better and seeing if we can improve them.”

Indeed, even as Immuneering’s business model has evolved over the last 12 years, the company has never wavered in its larger mission.

“There’s been a ton of great progress in medicine, but when someone gets a cancer diagnosis, it’s still, more likely than not, very bad news,” Zeskind says. “It’s a real unsolved problem. So by taking a counterintuitive approach and using data, we’re really focused on bringing forward medicines that can have the kind of durable responses that inspired us all those years ago with IL-2. We’re really excited about the impact the medicines we’re developing are going to have.”



from MIT News https://ift.tt/3jpBKE5
via Gabe's Musing's

Thursday, October 22, 2020

Yogesh Surendranath wants to decarbonize our energy systems

Electricity plays many roles in our lives, from lighting our homes to powering the technology and appliances we rely on every day. Electricity can also have a major impact at the molecular scale, by powering chemical reactions that generate useful products.

Working at that molecular level, MIT chemistry professor Yogesh Surendranath harnesses electricity to rearrange chemical bonds. The electrochemical reactions he is developing hold potential for processes such as splitting water into hydrogen fuel, creating more efficient fuel cells, and converting waste products like carbon dioxide into useful fuels.

“All of our research is about decarbonizing the energy ecosystem,” says Surendranath, who recently earned tenure in MIT’s Department of Chemistry and serves as the associate director of the Carbon Capture, Utilization, and Storage Center, one of the Low-Carbon Energy Centers run by the MIT Energy Initiative (MITEI).

Although his work has many applications in improving energy efficiency, most of the research projects in Surendranath’s group have grown out of the lab’s fundamental interest in exploring, at a molecular level, the chemical reactions that occur between the surface of an electrode and a liquid.

“Our goal is to uncover the key rate-limiting processes and the key steps in the reaction mechanism that give rise to one product over another, so that we can, in a rational way, control a material's properties so that it can most selectively and efficiently carry out the overall reaction,” he says.

Energy conversion

Born in Bangalore, India, Surendranath moved to Kent, Ohio, with his parents when he was 3 years old. Bangalore and Kent happen to be home to the world’s leading centers for studying liquid crystal materials, the field that Surendranath’s father, an organic chemist, specialized in.

“My dad would often take me to the laboratory, and although my parents encouraged me to pursue medicine, I think my interest in science and chemistry probably was sparked at an early age, by those experiences,” Surendranath recalls.

Although he was interested in all of the sciences, he narrowed his focus after taking his first college chemistry class at the University of Virginia, with a professor named Dean Harman. He decided on a double major in chemistry and physics and ended up doing research in Harman’s inorganic chemistry lab.

After graduating from UVA, Surendranath came to MIT for graduate school, where his thesis advisor was then-MIT professor Daniel Nocera. With Nocera, he explored using electricity to split water as a way of renewably generating hydrogen. Surendranath’s PhD research focused on developing methods to catalyze the half of the reaction that extracts oxygen gas from water.
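
For orientation, the chemistry behind that doctoral work can be written out in the standard textbook half-reactions of water splitting in acid; the oxygen-evolving half that Surendranath targeted is the slower, four-electron step. These equations are general electrochemistry, not details drawn from his papers.

```latex
% Standard water-splitting half-reactions in acid (textbook form, for orientation)
\begin{align*}
\text{Oxygen evolution (anode):}&\quad 2\,\mathrm{H_2O} \;\rightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{Hydrogen evolution (cathode):}&\quad 4\,\mathrm{H^+} + 4\,e^- \;\rightarrow\; 2\,\mathrm{H_2} \\
\text{Overall:}&\quad 2\,\mathrm{H_2O} \;\rightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}
```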

He got even more involved in catalyst development while doing a postdoctoral fellowship at the University of California at Berkeley. There, he became interested in nanomaterials and the reactions that occur at the interfaces between solid catalysts and liquids.

“That interface is where a lot of the key processes that are involved in energy conversion occur in electrochemical technologies like batteries, electrolyzers, and fuel cells,” he says.

In 2013, Surendranath returned to MIT to join the faculty, at a time when many other junior faculty members were being hired.

“One of the most attractive features of the department is its balanced composition of early career and senior faculty. This has created a nurturing and vibrant atmosphere that is highly collaborative,” he says. “But more than anything else, it was the phenomenal students at MIT that drew me back. Their intensity and enthusiasm is what drives the science.”

Fuel decarbonization

Among the many electrochemical reactions that Surendranath’s lab is trying to optimize is the conversion of carbon dioxide to simple chemical fuels such as carbon monoxide, ethylene, or other hydrocarbons. Another project focuses on converting methane that is burned off from oil wells into liquid fuels such as methanol.

“For both of those areas, the idea is to convert carbon dioxide and low-carbon feedstocks into commodity chemicals and fuels. These technologies are essential for decarbonizing the chemistry and fuels sector,” Surendranath says.
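
As a rough illustration of what converting these feedstocks can mean at the level of individual reactions, two representative overall transformations are the two-electron reduction of carbon dioxide to carbon monoxide and the partial oxidation of methane to methanol. These are standard stoichiometric equations shown for context, not mechanisms from Surendranath’s work.

```latex
% Representative overall transformations (standard stoichiometry, for illustration only)
\begin{align*}
\mathrm{CO_2}\ \text{to carbon monoxide:}&\quad \mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- \;\rightarrow\; \mathrm{CO} + \mathrm{H_2O} \\
\text{Methane to methanol:}&\quad \mathrm{CH_4} + \tfrac{1}{2}\,\mathrm{O_2} \;\rightarrow\; \mathrm{CH_3OH}
\end{align*}
```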

Other projects include improving the efficiency of catalysts used for water electrolysis and fuel cells, and for producing hydrogen peroxide (a versatile disinfectant). Many of those projects have grown out of his students’ eagerness to chase after difficult problems and follow up on unexpected findings, Surendranath says.

“The true joy of my time here, in addition to the science, has been about seeing students that I've mentored grow and mature to become independent scientists and thought leaders, and then to go off and launch their own independent careers, whether it be in industry or in academia,” he says. “That role as a mentor to the next generation of scientists in my field has been extraordinarily rewarding.”

Although they take their work seriously, Surendranath and his students like to keep the mood light in their lab. He often brings mangoes, coconuts, and other exotic fruits in to share, and enjoys flying stunt kites — kites with multiple lines that allow them to perform acrobatic maneuvers such as figure eights. He can also occasionally be seen making balloon animals or blowing extremely large soap bubbles.

“My group has really cultivated an extraordinarily positive, collaborative, uplifting environment where we go after really hard problems, and we have a lot of fun along the way,” Surendranath says. “I feel blessed to work with people who have invested so much in the research effort and have built a culture that is such a pleasure to work in every day.”



from MIT News https://ift.tt/2HtaWpq
via Gabe's Musing's

Wednesday, October 21, 2020

“What to Expect When You’re Expecting Robots”

As Covid-19 has made it necessary for people to keep their distance from each other, robots are stepping in to fill essential roles, such as sanitizing warehouses and hospitals, ferrying test samples to laboratories, and serving as telemedicine avatars.

There are signs that people may be increasingly receptive to robotic help, preferring, at least hypothetically, to be picked up by a self-driving taxi or have their food delivered via robot, to reduce their risk of catching the virus.

As more intelligent, independent machines make their way into the public sphere, engineers Julie Shah and Laura Major are urging designers to rethink not just how robots fit in with society, but also how society can change to accommodate these new, “working” robots.

Shah is an associate professor of aeronautics and astronautics at MIT and the associate dean of social and ethical responsibilities of computing in the MIT Schwarzman College of Computing. Major SM ’05 is CTO of Motional, a self-driving car venture supported by automotive companies Hyundai and Aptiv. Together, they have written a new book, “What to Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration,” published this month by Basic Books.

What we can expect, they write, is that robots of the future will no longer work for us, but with us. They will be less like tools, programmed to carry out specific tasks in controlled environments, as factory automatons and domestic Roombas have been, and more like partners, interacting with and working among people in the more complex and chaotic real world. As such, Shah and Major say that robots and humans will have to establish a mutual understanding.

“Part of the book is about designing robotic systems that think more like people, and that can understand the very subtle social signals that we provide to each other, that make our world work,” Shah says. “But equal emphasis in the book is on how we have to structure the way we live our lives, from our crosswalks to our social norms, so that robots can more effectively live in our world.”

Getting to know you

As robots increasingly enter public spaces, they may do so safely if they have a better understanding of human and social behavior.

Consider a package delivery robot on a busy sidewalk: The robot may be programmed to give a standard berth to obstacles in its path, such as traffic cones and lampposts. But what if the robot is coming upon a person wheeling a stroller while balancing a cup of coffee? A human passerby would read the social cues and perhaps step to the side to let the stroller by. Could a robot pick up the same subtle signals to change course accordingly?

Shah believes the answer is yes. As head of the Interactive Robotics Group at MIT, she is developing tools to help robots understand and predict human behavior, such as where people move, what they do, and who they interact with in physical spaces. She’s implemented these tools in robots that can recognize and collaborate with humans in environments such as the factory floor and the hospital ward. She is hoping that robots trained to read social cues can more safely be deployed in more unstructured public spaces.

Major, meanwhile, has been helping to make robots, and specifically self-driving cars, work safely and reliably in the real world, beyond the controlled, gated environments where most driverless cars operate today. About a year ago, she and Shah met for the first time, at a robotics conference.

“We were working in parallel universes, me in industry, and Julie in academia, each trying to galvanize understanding for the need to accommodate machines and robots,” Major recalls.

From that first meeting, the seeds for their new book began quickly to sprout.

A cyborg city

In their book, the engineers describe ways that robots and automated systems can perceive and work with humans — but also ways in which our environment and infrastructure can change to accommodate robots.

A cyborg-friendly city, engineered to manage and direct robots, could avoid scenarios such as the one that played out in San Francisco in 2017. Residents there were seeing an uptick in delivery robots deployed by local technology startups. The robots were causing congestion on city sidewalks and were an unexpected hazard to seniors with disabilities. Lawmakers ultimately enforced strict regulations on the number of delivery robots allowed in the city — a move that improved safety, but potentially at the expense of innovation.

If in the near future there are to be multiple robots sharing a sidewalk with humans at any given time, Shah and Major propose that cities might consider installing dedicated robot lanes, similar to bike lanes, to avoid accidents between robots and humans. The engineers also envision a system to organize robots in public spaces, similar to the way airplanes keep track of each other in flight.

In 1958, the Federal Aviation Agency was created, partly in response to a catastrophic 1956 crash between two planes flying through a cloud over the Grand Canyon. Prior to that crash, airplanes were virtually free to fly where they pleased. The FAA began organizing airplanes in the sky through innovations like the traffic collision avoidance system, or TCAS — a system onboard most planes today that detects other planes outfitted with a universal transponder. TCAS alerts the pilot of nearby planes, and automatically charts a path, independent of ground control, for the plane to take in order to avoid a collision.

Similarly, Shah and Major say that robots in public spaces could be designed with a sort of universal sensor that enables them to see and communicate with each other, regardless of their software platform or manufacturer. This way, they might stay clear of certain areas, avoiding potential accidents and congestion, if they sense robots nearby.

“There could also be transponders for people that broadcast to robots,” Shah says. “For instance, crossing guards could use batons that can signal any robot in the vicinity to pause so that it’s safe for children to cross the street.”
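
To make the transponder idea concrete, here is a small, hypothetical sketch of the kind of right-of-way logic described above: each robot periodically broadcasts its position, listens for broadcasts from nearby robots and from people such as a crossing guard with a baton, and pauses when another transponder is too close or signals a stop. The message format, distances, and names are all invented for illustration; this is not a protocol specified in the book.

```python
# Toy sketch of a shared "transponder" protocol for sidewalk robots.
# All message formats and thresholds are hypothetical illustrations.
from dataclasses import dataclass
from math import hypot

@dataclass
class Beacon:
    sender_id: str
    x: float              # position in meters, shared map frame
    y: float
    kind: str              # "robot" or "crossing_guard"
    stop_request: bool = False

SAFE_DISTANCE_M = 3.0

def decide_action(own_x, own_y, beacons):
    """Return 'pause' if a nearby transponder requests a stop or is too close, else 'proceed'."""
    for b in beacons:
        distance = hypot(b.x - own_x, b.y - own_y)
        if b.kind == "crossing_guard" and b.stop_request and distance < 15.0:
            return "pause"   # defer to a human crossing guard's signal
        if b.kind == "robot" and distance < SAFE_DISTANCE_M:
            return "pause"   # yield to avoid robot-robot congestion
    return "proceed"

# Example: a crossing guard 10 m away raises a stop signal.
nearby = [Beacon("guard-7", 4.0, 9.0, "crossing_guard", stop_request=True),
          Beacon("robot-12", 40.0, 2.0, "robot")]
print(decide_action(0.0, 0.0, nearby))   # -> "pause"
```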

Whether we are ready for them or not, the trend is clear: The robots are coming, to our sidewalks, our grocery stores, and our homes. And as the book’s title suggests, preparing for these new additions to society will take some major changes, in our perception of technology, and in our infrastructure.

“It takes a village to raise a child to be a well-adjusted member of society, capable of realizing his or her full potential,” write Shah and Major. “So, too, a robot.”



from MIT News https://ift.tt/3dOUZWM
via Gabe's Musing's

Tuesday, October 20, 2020

Bringing construction projects to the digital world

People who work behind a computer screen all day take it for granted that everyone’s work will be tracked and accessible when they collaborate with others. But if your job takes place out in the real world, managing projects can require a lot more effort.

In construction, for example, general contractors and real estate developers often need someone to be physically present on a job site to verify work is done correctly and on time. They might also rely on a photographer or smartphone images to document a project’s progress. Those imperfect solutions can lead to accountability issues, unnecessary change orders, and project delays.

Now the startup OpenSpace is bringing some of the benefits of digital work to the real world with a solution that uses 360-degree cameras and computer vision to create comprehensive, time-stamped digital replicas of construction sites.

All customers need to do is walk their job site with a small 360-degree camera on their hard hat. The OpenSpace Vision Engine maps the photos to work plans automatically, creating a Google Street View-like experience for people to remotely tour work sites at different times as if they were physically present.

The company is also deploying analytics solutions that help customers track progress and search for objects on their job sites. To date, OpenSpace has helped customers map more than 1.5 billion square feet of construction projects, including bridges, hospitals, football stadiums, and large residential buildings.

The solution is helping workers in the construction industry improve accountability, minimize travel, reduce risks, and more.

“The core product we have today is a simple idea: It allows our customers to have a complete visual record of any space, indoor or outdoor, so they can see what’s there from anywhere at any point in time,” says OpenSpace co-founder and CEO Jeevan Kalanithi SM ’07. “They can teleport into the site to inspect the actual reality, but they can also see what was there yesterday or a week ago or five years ago. It brings this ground truth record to the site.”

Shining a light on construction sites

The founders of OpenSpace originally met during their time at MIT. At the Media Lab, Kalanithi and David Merrill SM ’06, PhD ’09 built a gaming system based on small cubes that used LCD touch screens and motion sensors to encourage kids to develop critical thinking skills. They spun the idea into a company, Sifteo, which created multiple generations of its toys.

In 2014, Sifteo was bought by 3D Robotics, then a drone company that would go on to focus on drone inspection software for construction, engineering, and mining firms. Kalanithi stayed with 3D Robotics for over two years, eventually serving as president of the company.

In the summer of 2016, Kalanithi left 3D Robotics with the intention of spending more time with friends and family. He reconnected with two friends from MIT, Philip DeCamp ’05, SM ’08, PhD ’13 and Michael Fleischman PhD ’08, who had developed new machine vision and AI techniques in their doctoral research. Fleischman had started a social media analytics company he sold to Twitter.

At the time, DeCamp and Fleischman were considering ways to use machine vision advances with 360-degree cameras. Kalanithi, who had helped guide 3D Robotics toward the construction industry, thought he had the perfect application.

People have long used photographs to document construction projects, and contracts for large construction projects often require photos of progress to be taken. But the photos never document the entire site, and they aren’t taken frequently enough to capture every phase of work.

Early versions of the OpenSpace solution required someone to set up a tripod in every space of a construction project. A breakthrough came when one early user, a straight-talking project manager, gave the founders some useful feedback.

“I was showing him the output of our product at the time, which looks similar to now, and he says, ‘This is great. How long did it take you?’ When I told him he said, ‘Well that’s cool Jeevan, but there’s no way we’re going to use that,’” Kalanithi recalls. “I thought maybe this idea isn’t so good after all. But then he gave us the idea. He said, ‘What would be great is if I could just wear that little camera and walk around. I walk around the job site all the time.’”

The founders took the advice and repurposed their solution to work with off-the-shelf 360-degree cameras and slightly modified hard hats. The cameras take pictures every half second and use artificial intelligence techniques to identify the camera’s precise location, even indoors. Once a few tours of the job site have been uploaded to OpenSpace’s platform, it can map pictures onto site plans within 15 minutes.
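
As a rough illustration of what “mapping pictures onto site plans” involves, the toy sketch below assigns each time-stamped capture a coordinate on a floor plan by interpolating along a walked route between known waypoints. OpenSpace’s actual Vision Engine localizes images with computer vision rather than timestamps; the code is only a stand-in to make the data flow concrete, and every name and number in it is invented.

```python
# Toy stand-in for mapping a walked 360-degree capture sequence onto a floor plan.
# Real systems localize each image with computer vision; here we simply interpolate
# capture times along a route between known plan waypoints. All values are invented.
import numpy as np

# Waypoints on the floor plan (x, y in meters) and the times they were passed (seconds).
waypoint_xy = np.array([[0.0, 0.0], [12.0, 0.0], [12.0, 8.0]])
waypoint_t = np.array([0.0, 20.0, 35.0])

# One capture every 0.5 seconds during the walk.
capture_times = np.arange(0.0, 35.0, 0.5)

# Interpolate x and y separately against time to place each capture on the plan.
capture_x = np.interp(capture_times, waypoint_t, waypoint_xy[:, 0])
capture_y = np.interp(capture_times, waypoint_t, waypoint_xy[:, 1])

site_record = [
    {"t": float(t), "plan_xy": (float(x), float(y)), "image": f"frame_{i:04d}.jpg"}
    for i, (t, x, y) in enumerate(zip(capture_times, capture_x, capture_y))
]
print(site_record[0], site_record[-1], sep="\n")
```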

Kalanithi still remembers the excitement the founders felt the first time they saved a customer money, helping to settle a dispute between a general contractor and a drywall specialist. Since then they’ve gotten a lot of those calls, in some cases saving companies millions of dollars. Kalanithi says reducing builders’ costs helps the construction industry meet growing needs related to aging infrastructure and housing shortages.

Helping nondigital workers

OpenSpace’s analytics solutions, which the company calls its ClearSight suite of products, have not been rolled out to every customer yet. But Kalanithi believes they will bring even more value to people managing work sites.

“If you have someone walking around the project all the time, we can start classifying and computing what they’re seeing,” Kalanithi says. “So, we can see how much framing and drywall is being installed, how quickly, how much material was used. That’s the basis for how people get paid in this industry: How much work did you do?”

Kalanithi believes ClearSight is the beginning of a new phase for OpenSpace, where the company can use AI and computer vision to give customers a new perspective on what’s going on at their job site.

“The product experience today, where you look around to see the site, will be something people sometimes do on OpenSpace, but they may be spending more time looking at productivity charts and little OpenSpace verified payment buttons, and maybe sometimes they’ll drill down to look at the actual images,” Kalanithi says.

The Covid-19 pandemic accelerated some companies’ adoption of digital solutions to help cut down on travel and physical contact. But even in states that have resumed construction, Kalanithi says customers are continuing to use OpenSpace, a key indicator of the value it brings.

Indeed, the vast majority of the information captured by OpenSpace was never available before, and it brings with it the potential for major improvements in the construction industry and beyond.

“If the last decade was defined by the cloud and mobile technology being the real enabling technologies, I think this next decade will be innovations that affect people in the real physical world,” Kalanithi says. “Because cameras and computer vision are getting better, so for a lot of people who have been ignored or left behind by technology based on the work they do, we’ll have the opportunity to make some amends and build some stuff that will make those folks’ lives easier.”



from MIT News https://ift.tt/35i7XZ5
via Gabe's Musing's

Translating lost languages using machine learning

Recent research suggests that most languages that have ever existed are no longer spoken. Dozens of these dead languages are also considered to be lost, or “undeciphered” — that is, we don’t know enough about their grammar, vocabulary, or syntax to be able to actually understand their texts.

Lost languages are more than a mere academic curiosity; without them, we miss an entire body of knowledge about the people who spoke them. Unfortunately, most of them have such minimal records that scientists can’t decipher them by using machine-translation algorithms like Google Translate. Some don’t have a well-researched “relative” language to be compared to, and often lack traditional dividers like white space and punctuation. (To illustrate, imaginetryingtodecipheraforeignlanguagewrittenlikethis.)

However, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently made a major advance in this area: a new system that has been shown to be able to automatically decipher a lost language, without needing advanced knowledge of its relation to other languages. They also showed that their system can itself determine relationships between languages, and they used it to corroborate recent scholarship suggesting that the language of Iberian is not actually related to Basque.

The team’s ultimate goal is for the system to be able to decipher lost languages that have eluded linguists for decades, using just a few thousand words.

Spearheaded by MIT Professor Regina Barzilay, the system relies on several principles grounded in insights from historical linguistics, such as the fact that languages generally only evolve in certain predictable ways. For instance, while a given language rarely adds or deletes an entire sound, certain sound substitutions are likely to occur. A word with a “p” in the parent language may change into a “b” in the descendant language, but changing to a “k” is less likely due to the significant pronunciation gap.

By incorporating these and other linguistic constraints, Barzilay and MIT PhD student Jiaming Luo developed a decipherment algorithm that can handle the vast space of possible transformations and the scarcity of a guiding signal in the input. The algorithm learns to embed language sounds into a multidimensional space where differences in pronunciation are reflected in the distance between corresponding vectors. This design enables them to capture pertinent patterns of language change and express them as computational constraints. The resulting model can segment words in an ancient language and map them to counterparts in a related language.  
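
A heavily simplified sketch of that embedding idea: represent each sound as a vector, score a candidate word pairing by the summed distance between aligned sound vectors (so a p-to-b correspondence costs little while p-to-k costs a lot), and score a candidate related language by averaging the best pairing costs over a vocabulary, which is also the intuition behind the proximity assessment described below. The real model learns its embeddings and handles segmentation and alignment; the example below hard-codes tiny vectors and equal-length made-up words purely to illustrate the distance intuition.

```python
# Toy illustration of sound embeddings for decipherment: close vectors mean plausible
# sound correspondences (p ~ b), distant vectors mean implausible ones (p ~ k).
# Embeddings, words, and languages are invented; the real model learns all of this.
import numpy as np

sound_vec = {                     # hand-made 2-d "embeddings" of sounds
    "p": np.array([1.0, 0.0]),
    "b": np.array([0.9, 0.2]),    # near "p": a voicing change is cheap
    "k": np.array([-1.0, 0.5]),   # far from "p": an unlikely substitution
    "a": np.array([0.0, 1.0]),
    "o": np.array([0.1, 0.9]),
}

def word_cost(lost_word, candidate_word):
    """Sum of sound-embedding distances for an equal-length alignment (toy simplification)."""
    return sum(np.linalg.norm(sound_vec[x] - sound_vec[y])
               for x, y in zip(lost_word, candidate_word))

def language_proximity(lost_vocab, candidate_vocab):
    """Average best-match cost over a vocabulary: lower means the languages look closer."""
    return np.mean([min(word_cost(w, c) for c in candidate_vocab) for w in lost_vocab])

lost = ["pa", "po"]
lang_1 = ["ba", "bo"]             # systematic p -> b shift: low cost
lang_2 = ["ka", "ko"]             # p -> k shift: high cost
print(language_proximity(lost, lang_1), language_proximity(lost, lang_2))
```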

The project builds on a paper Barzilay and Luo wrote last year that deciphered the dead languages of Ugaritic and Linear B, the latter of which had previously taken decades for humans to decode. However, a key difference with that project was that the team knew that these languages were related to early forms of Hebrew and Greek, respectively.

With the new system, the relationship between languages is inferred by the algorithm. This question is one of the biggest challenges in decipherment. In the case of Linear B, it took several decades to discover the correct known descendant. For Iberian, the scholars still cannot agree on the related language: Some argue for Basque, while others refute this hypothesis and claim that Iberian doesn’t relate to any known language. 

The proposed algorithm can assess the proximity between two languages; in fact, when tested on known languages, it can even accurately identify language families. The team applied their algorithm to Iberian considering Basque, as well as less-likely candidates from Romance, Germanic, Turkic, and Uralic families. While Basque and Latin were closer to Iberian than other languages, they were still too different to be considered related. 

In future work, the team hopes to expand their work beyond the act of connecting texts to related words in a known language — an approach referred to as “cognate-based decipherment.” This paradigm assumes that such a known language exists, but the example of Iberian shows that this is not always the case. The team’s new approach would involve identifying the semantic meaning of words, even if they don’t know how to read them.

“For instance, we may identify all the references to people or locations in the document which can then be further investigated in light of the known historical evidence,” says Barzilay. “These methods of ‘entity recognition’ are commonly used in various text processing applications today and are highly accurate, but the key research question is whether the task is feasible without any training data in the ancient language.”

The project was supported, in part, by the Intelligence Advanced Research Projects Activity (IARPA).



from MIT News https://ift.tt/3ojj6lh
via Gabe's Musing's