Upcoming Events & Important Dates
Important Dates & Reminders

Friday, February 10, 2023: Last day to drop a class for Winter via CAESAR without a W
Monday, February 20, 2023: Registration for Spring Quarter begins
Monday, March 13, 2023: Winter examinations begin
Saturday, March 18, 2023: Winter examinations end / Spring Break begins
Monday, March 20, 2023: Winter grades due at 3 p.m.

------

TGS students who wish to graduate in Winter 2023 must meet the following deadlines:

Friday, February 24: Deadline for TGS to receive program approval of PhD Final Exam forms via GSTS, dissertations via ProQuest, and change-of-grade forms for any outstanding Y/K/X/NR grades.
Friday, March 10: Deadline for TGS to receive program approval of Master's Degree Completion forms via GSTS and change-of-grade forms for any outstanding Y/K/X/NR grades.

For additional information about PhD and Master's completion, please review your program handbook and The Graduate School requirements.
Please send any upcoming news and events to news@cs.northwestern.edu to be included in future bulletins. Events must be submitted by Thursday at 12 PM to be featured in that week's bulletin; events received afterward will be included at our discretion.
Monday / CS Seminar
February 6th / 10:00 AM
In Person / Mudd 3514

Title: Aligning Machine Learning, Law, and Policy for Responsible Real-World Deployments

Speaker: Peter Henderson

Abstract: Machine learning (ML) is being deployed to a vast array of real-world applications with profound impacts on society. ML can have positive impacts, such as aiding in the discovery of new cures for diseases and improving government transparency and efficiency. But it can also be harmful: reinforcing authoritarian regimes, scaling the spread of disinformation, and exacerbating societal biases. As we rapidly move toward systemic use of ML in the real world, there are many unanswered questions about how to successfully use ML for social good while preventing its potential harms. Many of these questions inevitably require pursuing a deeper alignment between ML, law, and policy. Are certain algorithms truly compliant with current laws and regulations? Is there a better design that can make them more in tune with the regulatory and policy requirements of the real world? Are laws, policies, and regulations sufficiently informed by the technical details of ML algorithms, or will they be ineffective and out of sync? In this talk, I will discuss ways to bring together ML, law, and policy to address these questions. I will draw on real-world examples throughout the talk, including a unique real-world collaboration with the Internal Revenue Service. I will show how investigating questions of alignment between ML, law, and policy can advance core research in ML, as well as how we might develop new algorithms to expand policy and regulatory options. It is my hope that the tools discussed in this talk will help lead us to more effective and responsible ways of deploying ML in the real world, so that we steer toward positive impacts and away from potential harms.

Biography: Peter Henderson is a joint JD-PhD (Computer Science) candidate at Stanford University, advised by Dan Jurafsky. He is also an Open Philanthropy AI Fellow, a Graduate Student Fellow at the Stanford RegLab working closely with Daniel E. Ho, and a technical advisor at the Institute for Security and Technology. Previously, he received his M.Sc. at McGill University, advised by Joelle Pineau and David Meger. He has worked at the California Supreme Court, Amazon (AWS & Alexa), and Meta Fundamental AI Research.
Monday / CS Seminar
February 6th / 12:00 PM
In Person / Mudd 3514

Title: Toward Deep Semantic Understanding: Event-Centric Multimodal Knowledge Acquisition

Speaker: Manling Li

Abstract: Traditionally, multimodal information consumption has been entity-centric, with a focus on concrete concepts (such as objects, object types, and physical relations, e.g., a person in a car), but it lacks the ability to understand abstract semantics (such as events and the semantic roles of objects, e.g., driver, passenger, mechanic). However, such event-centric semantics are the core knowledge communicated, regardless of whether it takes the form of text, images, videos, or other data modalities. At the core of my research in Multimodal Information Extraction (IE) is bringing such deep semantic understanding to the multimodal world. My work opens up a new research direction, Event-Centric Multimodal Knowledge Acquisition, to transform traditional entity-centric, single-modal knowledge into event-centric, multimodal knowledge. Such a transformation poses two significant challenges: (1) understanding multimodal semantic structures that are abstract (such as events and the semantic roles of objects): I will present my solution of zero-shot cross-modal transfer (CLIP-Event), which is the first to model event semantic structures for vision-language pretraining and supports zero-shot multimodal event extraction for the first time; (2) understanding long-horizon temporal dynamics: I will introduce the Event Graph Model, which empowers machines to capture complex timelines, intertwined relations, and multiple alternative outcomes. I will also show its positive results on long-standing open problems, such as timeline generation, meeting summarization, and question answering. Such event-centric multimodal knowledge starts the next generation of information access, which allows us to effectively access historical scenarios and reason about the future. I will lay out how I plan to grow a deep semantic understanding of the language world and the vision world, moving from concrete to abstract, from static to dynamic, and ultimately from perception to cognition.

Biography: Manling Li is a Ph.D. candidate in the Computer Science Department at the University of Illinois Urbana-Champaign. Her work on multimodal knowledge extraction won the ACL'20 Best Demo Paper Award, and her work on scientific information extraction from COVID literature won the NAACL'21 Best Demo Paper Award. She was a recipient of a Microsoft Research PhD Fellowship in 2021, was selected as a DARPA Riser in 2022 and an EECS Rising Star in 2022, was awarded the C.L. Dave and Jane W.S. Liu Award, and has been selected as a Mavis Future Faculty Fellow. She led 19 students in developing the UIUC information extraction system, which ranked 1st in the DARPA AIDA evaluation in 2019 and 2020. She has more than 30 publications on multimodal knowledge extraction and reasoning and has given tutorials on event-centric multimodal knowledge at ACL'21, AAAI'21, NAACL'22, AAAI'23, etc. Additional information is available at limanling.github.io/.
Wednesday / CS Seminar
February 8th / 10:00 AM
In Person / Mudd 3514

Title: Controlling Large Language Models: Generating (Useful) Text from Models We Don't Fully Understand

Speaker: Ari Holtzman

Abstract: Generative language models have recently exploded in popularity, with services such as ChatGPT deployed to millions of users. These neural models are fascinating, useful, and incredibly mysterious: rather than designing what we want them to do, we nudge them in the right direction and must discover what they are capable of. But how can we rely on such inscrutable systems? This talk will describe a number of key characteristics we want from generative models of text, such as coherence and correctness, and show how we can design algorithms to more reliably generate text with these properties. We will also highlight some of the challenges of using such models, including the need to discover and name new and often unexpected emergent behavior. Finally, we will discuss the implications this has for the grand challenge of understanding models at a level where we can safely control their behavior.

Biography: Ari Holtzman is a PhD student at the University of Washington. His research has focused broadly on generative models of text: how we can use them and how we can understand them better. His research interests have spanned everything from dialogue, including winning the first Amazon Alexa Prize in 2017, to fundamental research on text generation, such as proposing Nucleus Sampling, a decoding algorithm used broadly in deployed systems such as the GPT-3 API and in academic research. Ari completed an interdisciplinary degree at NYU combining computer science and the philosophy of language.
Wednesday / CS Seminar
February 8th / 12:00 PM
In Person / Mudd 3514

Title: Cryptography, Security, and Law

Speaker: Sunoo Park

Abstract: My research focuses on the security, privacy, and transparency of technologies in societal and legal context. My talk will focus on three of my recent works in this space, relating to (1) preventing exploitation of stolen email data, (2) enhancing accountability in electronic surveillance, and (3) legal risks faced by security researchers.

Biography: Sunoo Park is a Postdoctoral Fellow at Columbia University and Visiting Fellow at Columbia Law School. Her research interests range across cryptography, security, and technology law. She received her Ph.D. in computer science at MIT, her J.D. at Harvard Law School, and her B.A. in computer science at the University of Cambridge. She has also been affiliated with Cornell Tech's Digital Life Initiative, the Berkman Klein Center for Internet & Society at Harvard University, the MIT Media Lab's Digital Currency Initiative, and MIT's Internet Policy Research Initiative.
Tech Talk with Noah Levine
Feb 20th | 4PM | Zoom

Noah Levine ('98) is the current Vice President, Advanced Advertising at Warner Bros. Discovery.
RAISO Discussion with Sirus Bouchat
AI Ghost Work: The Invisible Human Labor beneath "Automated" Systems
Feb 10th | 12:30PM | University Hall 121

It is widely thought that AI systems can completely replace human labor. A more accurate picture reveals that AI innovation is powered by dispersed, underpaid workers. These invisible workers mine lithium and other minerals, maintain expansive data centers, label millions of pieces of training data, moderate content, and review and edit suboptimal AI outputs.
How might we incentivize companies to properly value their uncredited human labor? In our market economy, how might companies that value their human labor compete against those that don't? This week, we will be joined by a guest facilitator and NU professor, Sirus Bouchat, who researches political methodology and machine learning and currently teaches POLISCI 390: Ethical AI & the Politics of Tech. See you at University Hall 121 on Friday at 12:30PM to chat and drink Kung Fu Tea!

Location: University Hall 121 (Zoom accessibility: express interest in the RSVP!)
Date + Time: Friday, February 10th, 12:30PM
Food: Kung Fu Tea (RSVPs only!)
RSVP: https://forms.gle/sXiFA7ATGGr4dTpn8
RAISO socials
Instagram: @Raisogram
Twitter: @raisotweets
Website: https://www.raiso.org
Abstract: Population health screening can be an effective approach for identifying diseases at early stages to enable timely treatment and improve long-term health outcomes. The rationale is simple, but developing efficient screening policies at the population level under limited resources is nontrivial. This talk will discuss several studies we have conducted in modeling and optimizing screening policies in a variety of clinical contexts spanning infectious diseases, chronic diseases, and developmental disabilities. We will discuss the policy insights for screening — either common or unique to these clinical contexts — that are drawn from our modeling analyses, and future research opportunities in the emerging era of big healthcare data.

Bio: Qiushi Chen is an Assistant Professor in the Harold and Inge Marcus Department of Industrial and Manufacturing Engineering at the Pennsylvania State University. He earned his PhD in Operations Research from the Georgia Institute of Technology in 2016 and completed his postdoctoral training at Massachusetts General Hospital and Harvard Medical School. His research focuses on utilizing innovative mathematical modeling, optimization, and data analytics tools, integrated with real-world healthcare datasets, to better inform clinical decisions and policymaking in broad healthcare settings. He has been collaborating closely with researchers from medicine, health policy, health economics, and the social sciences on multidisciplinary projects modeling chronic diseases, behavioral and mental health, and opioid and substance use. His work has been supported by research grants from the National Institutes of Health. He currently serves on the Council of the INFORMS Health Applications Society.
Feb 22nd | 6:15-7:15 PM | The Garage

VentureCat, Northwestern's annual student startup competition, will be held on Wednesday, May 31, 2023, where Northwestern's most promising student-founded startups will compete for more than $300,000 in non-dilutive prize money. Interested in learning how to apply and compete? Register to attend our info session event on February 9 at 5:15 PM at The Garage. RSVP Here
WildHacks
April 15th - 16th

We would like to invite you to Northwestern WildHacks 2023, Saturday to Sunday, April 15-16, 2023. WildHacks is a 36-hour hackathon that is 100% free to attend. All levels of experience, backgrounds, and majors are welcome, and no prior programming experience is needed! If you're interested in participating in WildHacks 2023, fill out this interest form to stay up to date with info about the event, including when official registration opens in early February!
Check out wildhacksnu.com for more details or reach out to wildhacks@northwestern.edu with any questions!
Northwestern CS Launches New Research Track
The new research track is designed to provide second-year students majoring in computer science with a structured and mentored introduction to the research process. Read More
Showcasing Early-Career Researchers in Theoretical Computer Science
The Northwestern CS Theory Group and Toyota Technological Institute at Chicago co-hosted the Junior Theorists Workshop on January 5-6. Read More
Advancing Security and Privacy Education
A Q&A with Northwestern Computer Science assistant professor of instruction Sruti Bhagavatula. Read More
Michael Horn Receives Daniel I. Linzer Award
The award recognizes excellence in diversity, inclusivity and equity at Northwestern. Read More
© Robert R. McCormick School of Engineering and Applied Science, Northwestern University
Northwestern Department of Computer Science Mudd Hall, 2233 Tech Drive, Third Floor, Evanston, Illinois, 60208 Unsubscribe