 

Bulletin #4 | Friday, January 26, 2024

 

Important Dates & Reminders

Friday, February 9, 2024 | Last day to drop a FULL-TERM class for Winter via CAESAR. Requests after this date result in a W.

Monday, February 19, 2024 | Registration for Spring 2024 begins

Saturday, March 9, 2024 | Winter Classes End

Monday, March 11, 2024 | Winter Examinations Begin

Saturday, March 16, 2024 | Spring Break Begins

 

We want to hear from you! Please send any upcoming news and events to news@cs.northwestern.edu to be included in future bulletins and/or featured on our socials and website.

Events must be emailed at least one (1) week in advance.

 
View Page»
 

In this Issue

Upcoming Seminars:

Monday, January 29

"Making Machine Learning Predictably Reliable" (Andrew Ilyas)

 

Wednesday, January 31

"Rethinking Data Use in Large Language Models" (Sewon Min)

 

Monday, February 5

"TBA" (June Vuong)

 

Wednesday, February 7

"Integrating Theory and Practice in Complex Privacy-Preserving Data Analysis" (Wanrong Zhang)

 

CS Events:

CSPAC Workshop Series | Jan 30 & Feb 13

 

Northwestern Events

 

News

 

Upcoming CS Seminars

Missed a seminar? No worries! View past seminars via the Northwestern CS website (Northwestern login required).

View Past Seminars
 

January

29th - Andrew Ilyas

31st - Sewon Min

 

February

5th - June Vuong

7th - Wanrong Zhang

12th - Bento Natura

14th - Abhishek Shetty

 

 

Monday / CS Seminar
January 29th / 12:00 PM

In Person / Mudd 3514

"Making Machine Learning Predictably Reliable"

 

Abstract

Despite ML models' impressive performance, training and deploying them is currently a somewhat messy endeavor. But does it have to be? In this talk, I give an overview of my work on making ML "predictably reliable": enabling developers to know when their models will work, when they will fail, and why.

 

To begin, we use a case study of adversarial inputs to show that human intuition can be a poor predictor of how ML models operate. Motivated by this, we present a line of work that aims to develop a precise understanding of the ML pipeline, combining statistical tools with large-scale experiments to characterize the role of each individual design choice: from how to collect data, to what dataset to train on, to what learning algorithm to use.

 

Biography

Andrew Ilyas is a PhD student at MIT, advised by Constantinos Daskalakis and Aleksander Madry. His main interest is in reliable machine learning, where he seeks to understand the effects of the individual design choices involved in building ML models. He was previously supported by an Open Philanthropy AI Fellowship.

 

Research Interests/Area

Machine Learning

Wednesday / CS Seminar
January 31st / 12:00 PM

In Person / Mudd 3514

"Rethinking Data Use in Large Language Models"

 

Abstract

Large language models (LMs) such as ChatGPT have revolutionized natural language processing and artificial intelligence more broadly. In this talk, I will discuss my research on understanding and advancing these models, centered around how they use the very large text corpora they are trained on. First, I will describe our efforts to understand how these models learn to perform new tasks after training, demonstrating that their so-called in-context learning capabilities are almost entirely determined by what they learn from the training data, challenging a widely held belief. Next, I will introduce a new class of LMs that fundamentally rethink how models use their training data. These new models, nonparametric LMs, include not only learned parameters but also massive text corpora, from which they retrieve information for improved accuracy and flexibility. I will describe my work establishing the foundations for such models, showing they are more performant with fewer parameters and can easily stay up-to-date. I will also discuss how they open up new avenues for responsible data use, e.g., by segregating permissive and copyrighted text and using them differently. Finally, I will envision the next generation of LMs we should build, focusing on efficient scaling, better information-seeking, and responsible data use.

 

Biography

Sewon Min is a Ph.D. candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Her research focuses on language models (LMs): studying the science of LMs, and designing new model classes and learning methods that make LMs more performant and flexible. She also studies LMs in information-seeking, legal, and privacy contexts. She is a co-organizer of multiple tutorials and workshops, including most recently at ACL 2023 on Retrieval-based Language Models and Applications and upcoming at ICLR 2024 on Mathematical and Empirical Understanding of Foundation Models. She won a paper award at ACL 2023 and a J.P. Morgan Fellowship.

 

Research Interests/Area

Natural language processing; Machine learning

Wednesday / CS Seminar
February 7th / 12:00 PM

Hybrid / Mudd 3514

"Integrating Theory and Practice in Complex Privacy-Preserving Data Analysis"

 

Abstract

With growing concerns about large-scale data collection and surveillance, the development of privacy-preserving tools can help alleviate public fears about the misuse of personal data. The field of differential privacy (DP) offers powerful data analysis tools that provide worst-case privacy guarantees. However, the transition of differential privacy from academic research to practice introduces many new technical challenges, which range from fundamental theory to large-scale deployment and broader privacy concerns. 


In this talk, I will discuss my research efforts in tackling two major challenges: (1) How do we reason about more complex privacy accounting? (2) How do we identify and protect privacy beyond DP's individual-oriented focus?


To answer the first question, I will present optimal composition theorems for concurrently composing multiple interactive mechanisms, even in scenarios where we adaptively select privacy-loss parameters and create new mechanisms. These results offer strong theoretical foundations for enabling full adaptivity and interactivity in DP systems. Toward the second challenge, I will discuss a broader, non-individual notion of privacy for protecting sensitive global properties of a dataset, e.g., proprietary information or IP contained in the data. I will introduce an attack that identifies dataset-level privacy vulnerabilities, and a solution consisting of a theoretical framework that captures this global property privacy together with mechanisms that achieve it. I will conclude the talk with my future directions.

 

Biography

Wanrong Zhang is an NSF Computing Innovation Fellow in the Theory of Computing group at the Harvard John A. Paulson School of Engineering and Applied Sciences. She is also a member of the Harvard Privacy Tools/OpenDP project. Her primary focus is to address new challenges introduced by real-world deployments of differential privacy. Before joining Harvard, she received her Ph.D. from the Georgia Institute of Technology. She is a recipient of a Best Paper Award at CCS 2023 and a Computing Innovation Fellowship from CCC/CRA/NSF. She was selected as a Rising Star in EECS in 2022 and a Rising Star in Data Science in 2021.

 

Research Interests/Area

Data privacy, responsible computing

 

CS Department Events

CSPAC Workshop Series

The CS PhD Advisory Council (CSPAC) is a PhD student-led organization. Our mandate is to interface between PhD students and faculty on academic issues. Reach us at cspac@u.northwestern.edu.

Every Other Tuesday

Mudd 3514

AI@NU Graduate Student Group | Speaker Aimee Rinehart: How AI Is Transforming Journalism

Aimee Rinehart, Senior Product Manager of AI Strategy at The Associated Press, will discuss how AI is impacting journalism and how the AP is using AI tools as part of its Local News Initiative.

Tuesday, January 30, 2024 | 2:00 PM - 3:00 PM CT

Virtual via Zoom

More Info»
Zoom Link»

The Balance Between Model Performance and Interpretability in Demand Forecasting

Center for Deep Learning Seminar:

Jeff Tackes

Sr. Manager of Data Science, Kraft Heinz

 

Thursday, February 1st, 4:00 PM – 5:00 PM

Ford Design Center, Room 1.350 (ITW Classroom)

2133 Sheridan Road, Evanston, IL 60208

Abstract: In the rapidly evolving landscape of demand forecasting, businesses increasingly push the limits of forecast accuracy by looking to new state-of-the-art models to predict market trends and consumer behavior. However, the complexity of these models often comes at the cost of interpretability, posing significant challenges for data scientists and decision-makers. This talk addresses the critical balance between leveraging the predictive power of complex "black box" models and maintaining the interpretability necessary for strategic business applications.

Our discussion will begin by exploring the latest advancements in time series forecasting and their strengths and limitations. We will delve into the intricacies of models that, while powerful, often function as black boxes, making it difficult to understand the "why" behind their predictions. This opacity can hinder trust and adoption in business environments where understanding the reasoning behind forecasts is as crucial as the forecasts themselves.

To bridge this gap, we introduce SHAP (SHapley Additive exPlanations) values, a cutting-edge approach in model interpretability. SHAP values, grounded in cooperative game theory, provide a robust framework to decipher the contribution of each feature to the prediction of a complex model.

The talk will include real-world examples of how businesses today decide how to deploy machine learning in their business practices.
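As a rough illustration of the idea behind SHAP (this is not the speaker's material, and it sidesteps the `shap` library entirely), the sketch below computes exact Shapley values for a toy additive model. The feature names and weights are invented for demonstration; real SHAP implementations approximate this computation efficiently for large models.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value_fn):
    """Exact Shapley values: each feature's weighted average marginal
    contribution over all subsets of the other features.
    Exponential in len(features), so only for small illustrations."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                marginal = value_fn(set(subset) | {i}) - value_fn(set(subset))
                total += weight * marginal
        phi[i] = total
    return phi

# Hypothetical additive "demand model": the prediction is a weighted sum
# of whichever features are present (weights invented for illustration).
weights = {"price": 2.0, "promo": 1.5, "season": 0.5}

def model_output(active_features):
    return sum(weights[f] for f in active_features)

phi = shapley_values(list(weights), model_output)
# For an additive model, each Shapley value recovers the feature's own
# weight, and the values sum to the full prediction minus the baseline.
```

The key property demonstrated here is additivity: the attributions always sum to the model's output, which is what makes SHAP useful for explaining an individual forecast to a decision-maker.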

 

Bio: Jeff Tackes is Sr. Manager of Data Science at Kraft Heinz, headquartered in Chicago, IL. With over 10 years of industry experience, Jeff has developed a deep understanding of the intricacies of demand forecasting and has built best-in-class demand forecasting systems for leading Fortune 500 companies.

Jeff is known for his data-driven approach and innovative strategies that optimize forecasting models and improve business outcomes. He has led cross-functional teams in designing and implementing demand forecasting systems that have resulted in significant improvements in forecast accuracy, inventory optimization, and customer satisfaction. Jeff's expertise in statistical modeling, machine learning, and advanced analytics has enabled him to develop cutting-edge forecasting methodologies that have consistently outperformed industry standards. Jeff's strategic vision and ability to align demand forecasting with business goals have made him a trusted advisor to senior executives and a sought-after expert in the field.


RSVP»

Towards Human-centered AI: How to Generate Useful Explanations for Human-AI Decision Making

The Technology & Social Behavior Ph.D. Program is excited to welcome Professor Chenhao Tan of the University of Chicago to campus.

 

Professor Tan will give a talk entitled “Towards Human-centered AI: How to Generate Useful Explanations for Human-AI Decision Making” that will take place Thursday, February 8th from 4:00pm-5:00pm, with a reception to follow, in the Human-Computer Interaction + Design Center (Frances Searle Building, Room 1-122).

 

We welcome you to join!

Thursday, February 8, 2024; 4:00 PM - 5:00 PM

Human-Computer Interaction + Design Center

(Frances Searle Building, Room 1-122)

Furthering Connections at Argonne Visit Day

On January 9, Northwestern Computer Science welcomed collaborators from Argonne National Laboratory to discuss potential research intersections.

 

Read More

Profs Trade Notes as Law Schools Write Generative AI Policies

Professor Dan Linna discussed how institutions are grappling with the new technology.

 

Read More

Northwestern CS Announces Fall 2023 Outstanding Teaching Assistant and Peer Mentors

The quarterly department awards recognize exceptional service to the CS community.

 

Read More

View all News »

Kids of color get worse health care across the board in the U.S., research finds

Nia J. Heard-Garris, MD, MBA, MSc, Assistant Professor of Pediatrics (Advanced General Pediatrics and Primary Care) and member of IPHAM's Center for Health Equity Transformation, spoke with NPR about new findings showing that children from minority racial and ethnic groups receive poorer health care relative to non-Hispanic White children.

 

Read More

Cody Keenan Shares Lessons from Professional Journey to Obama White House

President Barack Obama’s second chief speechwriter, Keenan spoke at a Personal Development StudioLab lecture event and stressed to students the value of being open-minded and eager.

 

Read More

Using Liquid Crystals to Scale Up Perovskite Solar Cells

Northwestern researchers created a liquid crystal embedding coating that improved homogeneity for large perovskite films that could one day be used in solar cells.

 

Read More

© Robert R. McCormick School of Engineering and Applied Science, Northwestern University

Northwestern Department of Computer Science

Mudd Hall, 2233 Tech Drive, Third Floor, Evanston, Illinois, 60208

Unsubscribe