Upcoming Events & Important Dates
Important Dates & Reminders

- Monday, March 13, 2023: Winter Examinations begin
- Saturday, March 18, 2023: Winter Examinations end / Spring Break begins
- Monday, March 20, 2023: Winter grades due at 3 p.m.

TGS students who wish to graduate in Winter 2023 must meet the following deadlines:

- Friday, February 24: Deadline for TGS to receive program approval of PhD Final Exam forms via GSTS, dissertations via ProQuest, and change-of-grade forms for any outstanding Y/K/X/NR grades.
- Friday, March 10: Deadline for TGS to receive program approval of Master’s Degree Completion forms via GSTS and change-of-grade forms for any outstanding Y/K/X/NR grades.

For additional information about PhD and Master’s completion, please review your program handbook and The Graduate School requirements.
Happy Friday! We are delighted to share a new 'Why Join Northwestern Computer Science?' video with you: youtu.be/IQTYpn6MUcs
CS Seminar / Monday, February 27th / 10:00 AM / Hybrid / MM
Title: How efficiently can we check a computation?
Speaker: Nicholas Spooner
Abstract: In computer science we often ask: given a problem, how efficiently can we compute a solution? My work takes a different perspective, asking: if someone claims to have already computed a solution, how efficiently can we check it’s correct? This question has deep connections with many areas of theoretical computer science, including cryptography, complexity theory and quantum computing; and, more recently, has had significant impact in practice. In this talk I will focus on two aspects of my work in this area: first, on designing concretely efficient checking protocols; and second, on ensuring the integrity of efficient checking against quantum attackers.
Biography: Nicholas Spooner is an assistant professor at the University of Warwick, UK, which he joined in January 2021. Before that, he spent a year and a half as a postdoc at Boston University. He received his PhD from UC Berkeley in 2020. His interests lie within the union of cryptography, quantum computing, and proof systems.
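A classic toy illustration of this theme (an editor's example, not taken from the talk) is Freivalds’ algorithm: given matrices A, B and a claimed product C, each randomized trial checks the claim with O(n²) work instead of the O(n³) needed to recompute A·B.

```python
import numpy as np

def freivalds_check(A, B, C, trials=20):
    """Probabilistically check whether A @ B == C.

    Each trial costs only three matrix-vector products (O(n^2)).
    A wrong C is caught in a single trial with probability >= 1/2,
    so after `trials` rounds the error probability is <= 2**-trials.
    """
    n = C.shape[1]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=(n, 1))  # random 0/1 vector
        # Compare A @ (B @ r) with C @ r -- never form A @ B itself.
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # definitely wrong
    return True  # correct with high probability

n = 50
A = np.random.randint(0, 10, (n, n))
B = np.random.randint(0, 10, (n, n))
C = A @ B
assert freivalds_check(A, B, C)          # true product passes
C_bad = C.copy()
C_bad[0, 0] += 1
assert not freivalds_check(A, B, C_bad)  # a single wrong entry is caught
```

Checking is strictly cheaper than computing here, which is the asymmetry the talk's questions are built on.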
CS Seminar / Monday, February 27th / 12:00 PM / Mudd 3514
Title: Algorithms and Systems for Efficient Machine Learning
Speaker: Tri Dao (Faculty Candidate)
Abstract: Training of machine learning (ML) models will continue to consume more cycles, their inference will proliferate on more kinds of devices, and their capabilities will be used in more domains. Central to this future are the goals of making ML models efficient, so they remain practical to train and deploy, and of unlocking new application domains with new capabilities. We describe some recent developments in hardware-aware algorithms that improve the efficiency-quality tradeoff of ML models and equip them with long context. In the first half, we focus on structured sparsity, a natural approach to mitigating the extensive compute and memory cost of large ML models. We describe a line of work on learnable fast transforms which, thanks to their expressiveness and efficiency, yields some of the first sparse training methods to speed up large models in wall-clock time (2x) without compromising their quality. In the second half, we focus on efficient Transformer training and inference for long sequences. We describe FlashAttention, a fast and memory-efficient algorithm that computes attention with no approximation. By carefully accounting for reads and writes between levels of the memory hierarchy, FlashAttention is 2-4x faster and uses 10-20x less memory than the best existing attention implementations, allowing us to train higher-quality Transformers with 8x longer context. FlashAttention became widely used in some of the largest research labs and companies within just six months of its release. We conclude with some exciting directions in ML and systems, such as software-hardware co-design, structured sparsity for scientific AI, and long context for new AI workflows and modalities.
Biography: Tri Dao is a PhD student in Computer Science at Stanford, co-advised by Christopher Ré and Stefano Ermon. He works at the interface of machine learning and systems, and his research interests include sequence models with long-range memory and structured matrices for compact deep learning models. His work received the ICML 2022 Outstanding Paper runner-up award.
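For readers curious about the core trick behind memory-efficient attention, here is a minimal NumPy sketch (an editor's illustration, not the actual FlashAttention CUDA kernel) of the online-softmax accumulation that lets attention be computed one key block at a time, without ever materializing the full score matrix:

```python
import numpy as np

def attention_streaming(q, K, V, block=16):
    """softmax(q . K^T) @ V for one query, processing keys block by block.

    Only a running max (m), running denominator (l), and running weighted
    sum (acc) are kept; earlier accumulators are rescaled when a new,
    larger max is seen. This is the online-softmax idea at the heart of
    memory-efficient attention implementations.
    """
    m = -np.inf                      # running max of scores
    l = 0.0                          # running softmax denominator
    acc = np.zeros_like(V[0], dtype=float)
    for start in range(0, len(K), block):
        s = K[start:start + block] @ q      # scores for this key block
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)           # rescale old accumulators
        p = np.exp(s - m_new)               # unnormalized block weights
        l = l * scale + p.sum()
        acc = acc * scale + p @ V[start:start + block]
        m = m_new
    return acc / l

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
K = rng.standard_normal((64, 8))
V = rng.standard_normal((64, 8))

# Reference: ordinary (materialized) softmax attention for the same query.
scores = K @ q
w = np.exp(scores - scores.max())
w /= w.sum()
assert np.allclose(attention_streaming(q, K, V), w @ V)
```

The real kernel applies the same recurrence to tiles of queries and keys sized to fit in fast on-chip memory, which is where the speed and memory savings come from.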
CS Seminar / Monday, March 6th / 12:00 PM / Mudd 3514
Title: Guidance Helps Where Scale Doesn't in Language Modeling
Speaker: Ofir Press
Abstract: Language models (LMs) are at the core of almost all state-of-the-art natural language processing systems on almost every benchmark. Recent papers, such as Brown et al. 2020 and Hoffmann et al. 2022, have shown that scaling up the size of these models leads to better results. But is scaling all we need in order to improve language models? In this talk I argue that the answer is no, by presenting three studies that show properties of LMs that do not improve with scale. In each case, I show how to tackle the issue without increasing the size on disk, memory usage, or runtime of the LM, by adding a new kind of guidance to the model.

In Press & Wolf 2017 we showed that the decoding mechanism in LMs contains word representations, and that in models of different sizes the decoder word representations are of lower quality than the ones in the encoder. We then showed that by using the same representations twice (in both the encoder and the decoder) we improve LM performance while decreasing its size.

Memory constraints mean that LMs must be trained on limited segments of text. For example, GPT-3 (Brown et al. 2020) was trained on text segments that are 2,048 tokens long. Can these models summarize text sequences that are longer than the ones they observed at training? Can they make code predictions for code files that are longer than the ones they were shown during training? In Press et al. 2021 we show that existing LMs cannot process text segments longer than the ones they were trained on. We present a new method (ALiBi) that allows LMs to efficiently consume sequences longer than the ones they observed at training. ALiBi achieves this by guiding the LM to pay less attention to words that are further away.

Finally, in Press et al. 2022 we show that LMs are able to reason over facts observed during training to answer novel questions they have never previously seen. But in about 40% of cases they are unable to perform basic reasoning over facts they can recall, and this does not improve with scale. We show that by adding guidance to the way we prompt LMs, having them ask and answer sub-questions before answering the main complex question, we can substantially improve their reasoning capabilities.

These methods have been integrated into many state-of-the-art language and translation models, including OpenAI's GPT, Google's BERT, BigScience's BLOOM, and Microsoft's, Meta's, and Amazon's translation models.

Biography: Ofir Press is a PhD candidate (ABD) at the Paul G. Allen School for Computer Science & Engineering at the University of Washington, where he is advised by Noah Smith. During his PhD he spent two years as a visiting researcher at Facebook AI Research Labs on Luke Zettlemoyer’s team, where he mainly worked with Mike Lewis. Prior to that, in the summer of 2019, he interned at Facebook AI Research with Omer Levy. Towards the end of his PhD he spent half a year as a visiting researcher at MosaicML on Jonathan Frankle’s team. Before starting his PhD he completed his Bachelor’s and Master’s degrees in Computer Science at Tel Aviv University, where he was advised by Lior Wolf and also worked with Jonathan Berant. Between his Bachelor’s and Master’s degrees he was a software developer for a year.
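The ALiBi mechanism mentioned above is simple enough to sketch (an editor's NumPy illustration; the per-head slope schedule follows the geometric sequence the ALiBi paper uses as its default, but see the paper for details): each attention head adds a fixed penalty to its scores that grows linearly with how far back a key token sits.

```python
import numpy as np

def alibi_bias(num_heads, seq_len):
    """Per-head linear distance biases, shape (num_heads, seq_len, seq_len).

    Head h uses slope m_h = 2**(-8 * (h + 1) / num_heads), so different
    heads penalize distance at different rates. The bias is added to the
    raw attention scores before the softmax; no positional embeddings
    are needed, which is what lets ALiBi extrapolate to longer sequences.
    """
    slopes = np.array([2.0 ** (-8.0 * (h + 1) / num_heads)
                       for h in range(num_heads)])
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    distance = i - j                  # >= 0 for past tokens (j <= i)
    # Entries with j > i come out positive here, but the causal mask
    # sets those scores to -inf before the softmax anyway.
    return -slopes[:, None, None] * distance

bias = alibi_bias(num_heads=8, seq_len=4)
```

In use, `scores + bias` replaces plain `scores` inside attention; keys further in the past receive a larger negative bias and therefore less attention weight.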
March 2nd | 3:30 PM | Zoom: 986 0612 4124
In his talk, Gennaro Rodrigues will share his perspective on how data will help shape the future through the solution of complex problems, with cases and examples.
March 10th | Mudd, 3rd Floor
PhD Visit Day will be held March 10th. Look out for emails with more details.
Li Lab Postdoc Opening at Stanford AI & Medicine
The Li Lab at Stanford Medicine has an opening for a postdoctoral scholar interested in AI and medicine (in particular, medical imaging, computational pathology, and precision medicine). Prior experience with medical image analysis, computational pathology, or bioinformatics is desirable. For more information, please visit med.stanford.edu/lilab.html
March 1st | 6:00 PM | Kresge Hall 2425
It’s hard to ignore the growing conversation surrounding cryptocurrencies like Bitcoin, NFTs, and blockchain. Crypto enthusiasts promote these digital technologies as a replacement for banks, a lucrative investment, and a new way to buy art. While crypto technologies may seem confusing or risky, concerns around crypto’s environmental impact often fly under the radar. In reality, mining and maintaining cryptocurrencies causes more global CO2 emissions than Switzerland, Croatia, and Norway combined. Whose interests are best served by cryptocurrency, and who has the greatest risk of being harmed? How might we regulate or reconstruct crypto technologies to limit their environmental harm? Why is crypto worth keeping around? Come discuss!
Location: Kresge Hall 2425
Date + Time: Wednesday, March 1st, 6PM
Food: Popcorn
Code'n'Color
We Tell These Stories To Survive: Towards Abolition in Computer Science Education
Feb 28 | 3PM | Zoom: 933 3767 0766
Stephanie Jones and Natalie Araujo Melo will discuss their paper on anti-Blackness in computer science. Hosted by the student organization Code'n'Color.
WildHacks | April 15th - 16th
Registration for WildHacks 2023 is now open until February 28th @ 11:59pm! Registration is limited, so register on our website as soon as possible!
WildHacks is Northwestern University's 36-hour in-person hackathon taking place from Saturday, April 15th to Sunday, April 16th, 2023! Students of any skill level, major, school, and background are welcome. If you’re a beginner to programming, we’ll have workshops on GitHub, software development, and more! WildHacks is 100% FREE to participate -- register now to claim your spot for free food, fun social & destress events, swag, and chances to win prizes with your best ideas!
Time: 11am on Saturday, April 15th, 2023 to 5pm on Sunday, April 16th, 2023. The full schedule will be released closer to the event.
Location: Northwestern University’s Mudd Library: 2233 Tech Dr, Evanston, IL 60208
Check out our website wildhacks.net for more info about our event, including finding/signing up for teams, sleeping accommodations for non-Northwestern students, registration policies, and other logistics. Don’t forget to follow us on Facebook and Instagram to stay updated!
This year’s BEST Symposium will be held in person in Midland, MI, on July 24th-27th, 2023. The symposium is primarily intended to introduce Black, Latinx, and Native American U.S. doctoral and postdoctoral scientists to the wide range of rewarding careers in industrial research, and in particular the many opportunities with Dow, one of the world’s largest and leading materials science companies. This conference, developed jointly by our minority scientists and Ph.D. recruiting team, demonstrates our commitment to a diverse workforce.
This opportunity is for graduate students and postdoctoral scientists. Applicants must be pursuing degrees in chemistry, chemical engineering, mechanical engineering, materials science, physics, or other closely related fields, and should have a doctoral degree or expect to receive one by December 2024. Additional information may be found at their website. All applications are due by April 30th, 2023.
Participants in the conference may be considered for future employment at Dow. However, participation neither obligates the student to apply for employment nor guarantees future consideration for employment by Dow. For those wishing to learn more about opportunities, please visit our careers page. For further information, please contact the symposium chair, Karena Lekich.
The Deepfake Dangers Ahead
Professor V.S. Subrahmanian and two colleagues wrote that AI-generated disinformation, especially from hostile foreign powers, is a growing threat to democracies based on the free flow of ideas. Read More
Funding New Research to Operationalize Safety in Artificial Intelligence
The Center for Advancing Safety of Machine Intelligence (CASMI) will support eight new projects led by teams at Northwestern and partner institutions. Read More
Christos Dimoulas Receives Prestigious NSF CAREER Award
Dimoulas aims to develop a new empirical technique for evaluating the pragmatics of programming languages, that is, whether a programming language feature helps or hinders software developers in the context of a work task. Read More
Professor Emeritus Roger Schank Passes Away
Schank — a foundational pioneer in the fields of artificial intelligence, cognitive science, and learning sciences — passed away on January 29 at age 76. Read More
© Robert R. McCormick School of Engineering and Applied Science, Northwestern University
Northwestern Department of Computer Science Mudd Hall, 2233 Tech Drive, Third Floor, Evanston, Illinois, 60208 Unsubscribe