Upcoming Events & Important Dates
Important Dates & Reminders
- Friday, February 10, 2023: Last day to drop a Winter class via CAESAR without a W
- Monday, February 20, 2023: Registration for Spring Quarter begins
- Monday, March 13, 2023: Winter examinations begin
- Saturday, March 18, 2023: Winter examinations end / Spring Break begins
- Monday, March 20, 2023: Winter grades due at 3 p.m.

TGS students who wish to graduate in Winter 2023 must meet the following deadlines:
- Friday, February 24: Deadline for TGS to receive program approval of PhD Final Exam forms via GSTS, dissertations via ProQuest, and change-of-grade forms for any outstanding Y/K/X/NR grades.
- Friday, March 10: Deadline for TGS to receive program approval of Master's Degree Completion forms via GSTS and change-of-grade forms for any outstanding Y/K/X/NR grades.

For additional information about PhD and Master's completion, please review your program handbook and The Graduate School requirements.
Please send any upcoming news and events to news@cs.northwestern.edu to be included in future bulletins. Events must be sent by Thursday at 12 PM to be featured in that week's bulletin; events received afterward will be included at our discretion.
Monday / CS Seminar
February 13th / 10:00 AM / Mudd 3514
Title: Distance-Estimation in Modern Graphs: Algorithms and Impossibility
Speaker: Nicole Wein
Abstract: The size and complexity of today's graphs present challenges that necessitate the discovery of new algorithms. One central area of research in this endeavor is computing and estimating distances in graphs. In this talk I will discuss two fundamental families of distance problems in the context of modern graphs: Diameter/Radius/Eccentricities and Hopsets/Shortcut Sets. The best known algorithm for computing the diameter (largest distance) of a graph is the naive algorithm of computing all-pairs shortest paths and returning the largest distance. Unfortunately, this can be prohibitively slow for massive graphs. Thus, it is important to understand how fast and how accurately the diameter of a graph can be approximated. I will present tight bounds for this problem via conditional lower bounds from fine-grained complexity. Second, for a number of settings relevant to modern graphs (e.g., parallel algorithms, streaming algorithms, dynamic algorithms), distance computation is more efficient when the input graph has low hop-diameter. Thus, a useful preprocessing step is to add a set of edges (a hopset) to the graph that reduces its hop-diameter while preserving important distance information. I will present progress on upper and lower bounds for hopsets.
Biography: Nicole Wein is a Simons Postdoctoral Leader at DIMACS at Rutgers University. Previously, she obtained her Ph.D. from MIT, advised by Virginia Vassilevska Williams. She is a theoretical computer scientist whose research interests include graph algorithms and lower bounds, in areas such as distance-estimation algorithms, dynamic algorithms, and fine-grained complexity. //
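For readers curious about the baseline the abstract refers to: on an unweighted graph, the naive diameter algorithm (compute all-pairs shortest paths, return the largest distance) can be sketched with repeated breadth-first search. This is an illustrative sketch only, not material from the talk; function names are our own.

```python
from collections import deque

def bfs_distances(adj, src):
    """Single-source shortest path distances in an unweighted graph via BFS."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def naive_diameter(adj):
    """All-pairs shortest paths (one BFS per source), then the largest distance."""
    return max(max(bfs_distances(adj, s).values()) for s in adj)

# Path graph 0-1-2-3: the farthest pair is (0, 3), so the diameter is 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(naive_diameter(adj))  # 3
```

One BFS per source costs O(n(n + m)) total, which is exactly the cost the talk's abstract calls prohibitive for massive graphs.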
Monday / CS Seminar
February 13th / 12:00 PM / Mudd 3514
Title: Reading to Learn: Improving Generalization by Learning From Language
Speaker: Victor Zhong
Abstract: Traditional machine learning systems are trained on vast quantities of annotated data or experience. These systems often do not generalize to new, related problems that emerge after training, such as conversing about new topics or interacting with new environments. In this talk, I present Reading to Learn, a new class of algorithms that improve generalization by learning to read language specifications, without requiring any actual experience or labeled examples. This includes, for example, reading FAQ documents to learn to answer questions about new topics and reading manuals to learn to play new games. I will discuss new algorithms and data for Reading to Learn applied to a broad range of tasks, including policy learning in grounded environments and data synthesis for code generation, while also highlighting open challenges for this line of work. Ultimately, the goal of Reading to Learn is to democratize AI by making it accessible for low-resource problems where the practitioner cannot obtain annotated data at scale, but can instead write language specifications that models read to generalize.
Biography: Victor Zhong is a PhD student in the University of Washington Natural Language Processing group. His research is at the intersection of natural language processing and machine learning, with an emphasis on how to use language understanding to learn more generally and more efficiently. His research covers a range of topics, including dialogue, code generation, question answering, and grounded reinforcement learning. Victor has been awarded the Apple AI/ML Fellowship as well as an EMNLP Outstanding Paper award. His work has been featured in Wired, MIT Technology Review, TechCrunch, VentureBeat, Fast Company, and Quanta Magazine. He was a founding member of Salesforce Research, and has previously worked at Meta AI Research and Google Brain. He obtained a Master's in Computer Science from Stanford University and a Bachelor of Applied Science in Computer Engineering from the University of Toronto. //
Wednesday / CS Seminar
February 15th / 10:00 AM / Mudd 3514
Title: Computational Imaging for Enabling Vision Beyond Human Perception
Speaker: Mark Sheinin
Abstract: From minute surface vibrations to very fast-occurring events, the world is rich with phenomena humans cannot perceive. Likewise, most computer vision systems are primarily based on 'conventional' cameras, which were designed to mimic the imaging principle of the human eye, and are therefore equally blind to these ubiquitous phenomena. In this talk, I will show that we can capture these hidden phenomena by creatively building novel vision systems composed of common off-the-shelf components (i.e., cameras and optics) coupled with cutting-edge algorithms. Specifically, I will cover three projects using computational imaging to sense hidden phenomena. First, I will describe the ACam, a camera designed to capture the minute flicker of the electric lights ubiquitous in our modern environments. I will show that bulb flicker is a powerful visual cue that enables various applications like scene light source unmixing, reflection separation, and remote analyses of the electric grid itself. Second, I will describe Diffraction Line Imaging, a novel imaging principle that exploits diffractive optics to capture sparse 2D scenes with 1D (line) sensors. The method's applications include capturing fast motions (e.g., actors and particles within a fast-flowing liquid) and structured light 3D scanning with line illumination and line sensing. Lastly, I will present a new approach for sensing minute high-frequency surface vibrations (up to 63 kHz) from multiple scene sources simultaneously, using "slow" sensors rated for only 130 Hz. Applications include capturing vibration caused by audio sources (e.g., speakers, human voice, and musical instruments) and localizing vibration sources (e.g., the position of a knock on the door).
Biography: Mark Sheinin is a Postdoctoral Research Associate at Carnegie Mellon University's Robotics Institute, in the Illumination and Imaging Laboratory. He received his Ph.D. in Electrical Engineering from the Technion - Israel Institute of Technology in 2019. His work has received the Best Student Paper Award at CVPR 2017 and the Best Paper Honorable Mention Award at CVPR 2022. He received the Porat Award for Outstanding Graduate Students, the Jacobs-Qualcomm Fellowship in 2017, and the Jacobs Distinguished Publication Award in 2018. His research interests include computational photography and computer vision. //
Wednesday / CS Seminar
February 15th / 12:00 PM / Mudd 3514
Title: Democratizing Large-Scale AI Model Training via Heterogeneous Memory
Speaker: Dong Li
Abstract: The size of large artificial intelligence (AI) models has increased by about 200x over the past three years. To train models with billion- or even trillion-scale parameters, memory capacity becomes a major bottleneck, leading to a range of functional and performance issues. The memory capacity problem grows even worse as batch size, data modality, and training pipeline size and complexity increase. Recent advances in heterogeneous memory (HM) provide a cost-effective approach to increasing memory capacity. Using CPU memory as an extension of GPU memory, we can build an HM that enables large-scale AI model training without using extra GPUs to accommodate large memory consumption. However, not only does HM itself pose challenges for tensor allocation and migration, it is also unclear how HM affects training throughput. AI model training has unique memory access patterns and data structures, which create challenges for the promptness of data migration, load balancing, and tensor redundancy on the GPU. In this talk, we present our recent work on using HM to enable large-scale AI model training. We identify the major memory capacity bottleneck in tensors and minimize GPU memory usage through co-offloading of computing and tensors from the GPU. We also use analytical performance modeling to guide tensor migration between memory components in HM, in order to minimize migration volume and reduce load imbalance between batches. We show that with HM we can train industry-quality transformer models with over 13 billion parameters on a single GPU, a 10x increase in size compared to popular frameworks such as PyTorch, and we do so without requiring any model change from data scientists or sacrificing computational efficiency. Our work has been integrated into Microsoft DeepSpeed and employed in industry to democratize large-scale AI models. We also show that HM enables large-scale GNN training on billion-scale graphs without losing accuracy or running out of memory (OOM).
Biography: Dong Li is an associate professor in EECS at the University of California, Merced. Previously, he was a research scientist at Oak Ridge National Laboratory (ORNL). Dong earned his PhD in computer science from Virginia Tech. His research focuses on high performance computing (HPC) and maintains a strong relevance to computer systems. The core theme of his research is how to enable scalable and efficient execution of enterprise and scientific applications (including large-scale AI models) on increasingly complex parallel systems. Dong received an ORNL/CSMD Distinguished Contributor Award in 2013, a CAREER Award from the National Science Foundation in 2016, a Berkeley Lab University Faculty Fellowship in 2016, a Facebook research award in 2021, and an Oracle research award in 2022. His SC'14 paper was nominated for best student paper, and his ASPLOS'21 paper won the distinguished artifact award. He was also the lead PI for the NVIDIA CUDA Research Center at UC Merced. He is an associate editor for IEEE Transactions on Parallel and Distributed Systems (TPDS).
Tech Talk with Noah Levine
Feb 20th | 4PM | Zoom
Noah Levine ('98) is the current Vice President, Advanced Advertising at Warner Bros. Discovery.
Kiki & Conversation with Marquis Bey
Feb 22nd | 3PM | Mudd 3514
Come chat with Assistant Professor of African American Studies Marquis Bey about Theorizing Blackness.
Free snacks and refreshments provided.
Bagel Friday
Feb 24th | 9:30AM | Mudd 3514
Come enjoy free bagels and coffee.
Mudd PhD Visit Day
March 10th
PhD Visit Day will be held March 10th. Details to come.
Modeling and Optimization for Population Health Screening Policies
Dr. Qiushi Chen, Penn State University
Monday, February 13th, 11 AM – 12 PM CST
Abstract: Population health screening can be an effective approach for identifying diseases at early stages to enable timely treatment and to improve long-term health outcomes. The rationale is simple, but how to develop efficient screening policies at the population level while considering limited resources is nontrivial. This talk will discuss several studies we have conducted in modeling and optimizing screening policies in a variety of clinical contexts across infectious diseases, chronic diseases, and developmental disabilities. We will discuss the policy insights for screening—either common or unique to these clinical contexts—that are drawn from our modeling analyses, and future research opportunities in the emerging era of big healthcare data.
RSVP for this virtual seminar
VentureCat Info Session
Feb 22nd | 6:15-7:15 PM | The Garage
VentureCat, Northwestern's annual student startup competition, will be held on Wednesday, May 31, 2023, where Northwestern's most promising student-founded startups will compete for more than $300,000 in non-dilutive prize money. Interested in learning how to apply and compete? Register to attend our info session event on February 9 at 5:15 PM at The Garage. RSVP Here
WildHacks
April 15th - 16th
We would like to invite you to Northwestern WildHacks 2023, held Saturday to Sunday, April 15-16, 2023. WildHacks is a 36-hour hackathon that is 100% free to attend. All levels of experience, backgrounds, and majors are welcome, and no prior programming experience is needed! If you're interested in participating in WildHacks 2023, fill out this interest form to stay up to date with info about the event, including when official registration opens in early February!
Check out wildhacksnu.com for more details or reach out to wildhacks@northwestern.edu with any questions!
Defining Safety in Artificial Intelligence: ‘We Need to Have a Community’
Interdisciplinary thought leaders attended the Center for Advancing Safety of Machine Intelligence (CASMI) “Toward a Safety Science of AI” workshop last month to share ideas on how we can define, measure, and anticipate safety in artificial intelligence. Read More
Northwestern CS Launches New Research Track
The new research track is designed to provide second-year students majoring in computer science with a structured and mentored introduction to the research process. Read More
Showcasing Early-Career Researchers in Theoretical Computer Science
The Northwestern CS Theory Group and Toyota Technological Institute at Chicago co-hosted the Junior Theorists Workshop on January 5-6. Read More
Advancing Security and Privacy Education
A Q&A with Northwestern Computer Science assistant professor of instruction Sruti Bhagavatula. Read More
© Robert R. McCormick School of Engineering and Applied Science, Northwestern University
Northwestern Department of Computer Science Mudd Hall, 2233 Tech Drive, Third Floor, Evanston, Illinois, 60208 Unsubscribe