Computer Science Colloquia & Seminars are held each semester and sponsored by the Computer Science department. Faculty invite speakers from all areas of computer science, and the talks are open to all members of the RPI community. 

 

 

2024

Apr
23
2024
Human-ML Collaboration and the Role of Explainable ML

Machine Learning (ML) systems that inform real-world decisions are typically parts of larger sociotechnical systems, involve multiple human stakeholders, and rely on human-ML collaboration at different stages of the development and deployment pipeline. Given that ML systems increasingly inform decisions of consequence (e.g., loan decisions, criminal justice decisions), it is critical that human decision-makers interact effectively with the ML model. As such, “explainability” has become a highly desired feature in ML models we deploy in the real world. In this talk, we will give an overview of how the explainability of ML models fits into sociotechnical systems and discuss popular explainable ML methods, limitations of existing work, and open research questions.

Bio. Kasun Amarasinghe is a Senior Research Scientist in the Machine Learning Department at Carnegie Mellon University (CMU), in the Data Science for Public Policy Lab. Kasun studies human-ML collaborative decision-making systems in the public sector and how to develop responsible ML systems for such contexts. Before this role, Kasun was a Postdoctoral Research Associate at CMU. Kasun received his Ph.D. in Computer Science from Virginia Commonwealth University (thesis: Explainable Deep Neural Networks for Cyber-Physical Systems) and his B.Sc. in Computer Science from the University of Peradeniya, Sri Lanka.

Kasun Amarasinghe, Carnegie Mellon University
CII 3206 2:00 pm

Apr
17
2024
Signal recovery in the high-noise, high-dimensional regime

This talk will describe recent work on mathematical methods for signal recovery in high noise. The first part of the talk will explain the connection between the Wiener filter, singular value shrinkage, and Stein's method for covariance estimation, and review optimal shrinkage in the spiked covariance model. We will then present extensions to heteroscedastic noise and linearly-corrupted observations. Time permitting, we will also give an overview of the related class of orbit recovery problems.
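
For readers less familiar with this machinery, the sketch below is a minimal, generic illustration of singular value shrinkage for denoising a low-rank matrix observed in white noise; the threshold used is a crude heuristic on synthetic data, not the optimal shrinkage rules discussed in the talk.

```python
# Minimal sketch of signal recovery via singular value shrinkage (illustrative only).
# Assumes a low-rank signal observed in additive white Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
n, p, rank, sigma = 200, 100, 3, 1.0

# Ground-truth low-rank signal plus noise
signal = rng.normal(size=(n, rank)) @ rng.normal(size=(rank, p))
observed = signal + sigma * rng.normal(size=(n, p))

# Shrink singular values: keep only components above a noise-level threshold
U, s, Vt = np.linalg.svd(observed, full_matrices=False)
threshold = sigma * (np.sqrt(n) + np.sqrt(p))            # crude bulk-edge heuristic
s_shrunk = np.where(s > threshold, s - threshold, 0.0)   # soft-thresholding
estimate = (U * s_shrunk) @ Vt

err_raw = np.linalg.norm(observed - signal) / np.linalg.norm(signal)
err_est = np.linalg.norm(estimate - signal) / np.linalg.norm(signal)
print(f"relative error: observed {err_raw:.3f}, shrunk {err_est:.3f}")
```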

William Leeb is an Assistant Professor in the School of Mathematics at the University of Minnesota, Twin Cities. He earned a B.S. in Mathematics from the University of Chicago in 2010 and a Ph.D. in Mathematics from Yale University in 2015, and was a postdoc in the Program in Applied and Computational Mathematics at Princeton University until 2018, when he joined Minnesota. His research interests are in applied and computational harmonic analysis, statistical signal processing, and machine learning.

William Leeb, University of Minnesota, Twin Cities
CII 3206 2:00 pm

Apr
10
2024
Data-Driven Decision Making in Adversarial Environments: Challenges and Applications

Many real-world problems require the creation of robust AI models that include both learning and planning for an agent (or a team of agents) interacting with adversaries in a multi-agent environment. In such a complex setting, it is important to predict the strategic behavior of the adversaries, as well as to anticipate potential adversarial manipulations that could deteriorate the learning outcomes and the decision quality of our agents. In this talk, I will discuss the challenges of modeling adversaries’ decision making and the security of machine learning in data-driven multi-agent competitive environments. I will present our algorithms for addressing these challenges, which draw on techniques from reinforcement learning, game theory, and optimization. In addition, I will introduce some real-world applications of our algorithms in the domains of wildlife protection and public health.

 

Bio: Thanh Nguyen is an Assistant Professor in the Computer Science department at the University of Oregon (UO). Prior to UO, she was a postdoc at the University of Michigan and earned her PhD in Computer Science from the University of Southern California. Thanh’s work in the field of Artificial Intelligence is motivated by real-world societal problems, particularly in the areas of Public Safety and Security, Conservation, and Public Health. She brings together techniques from multi-agent systems, reinforcement learning, and game theory to solve problems in those areas, with a focus on studying adversary behavioral learning and deception in competitive multi-agent environments. Thanh’s work has been recognized by multiple awards, including the IAAI-16 Deployed Application Award and the AAMAS-16 runner-up for the Best Innovative Application Paper Award. Her work in wildlife protection and public health has been evaluated and/or deployed in multiple countries around the world.

Thanh Nguyen, University of Oregon
CII 3206 2:00 pm

Apr
4
2024
Towards Behavior-Informed Machine Learning

Machine learning (ML) has seamlessly integrated into various facets of everyday life, largely drawing on human data for its training. Consequently, these ML systems frequently exhibit and reflect human behavioral biases, leading to concerns across a variety of applications. In this presentation, I will discuss my recent efforts to develop behavior-informed machine learning, which considers and incorporates the impacts of human behavior into ML system design. Specifically, my focus will be on two crucial aspects of human behavior in the ML lifecycle: the generation of data used for training machine learning models, and the human decision-making processes that occur in conjunction with machine assistance. The goal of my work is to develop ML systems that are robust to behavioral training data and capable of augmenting and enhancing human decision-making capabilities.

Bio: Chien-Ju Ho is an assistant professor of Computer Science & Engineering at Washington University in St. Louis. Previously, he was a postdoctoral associate at Cornell University. He earned his PhD in Computer Science from the University of California, Los Angeles in 2015 and spent three years visiting the EconCS group at Harvard from 2012 to 2015. He received the Google Outstanding Graduate Research Award at UCLA in 2015. His work was nominated for the Best Paper Award at WWW 2015 and HCOMP 2021. His research broadly connects to the fields of machine learning, optimization, behavioral sciences, and algorithmic economics. He is interested in investigating the interactions between humans and AI, including enabling AI algorithms to learn from humans (e.g., in the context of crowdsourcing) and designing AI algorithms to assist human decision-making (e.g., through information design and environment design).

Chien-Ju Ho, Washington University in St. Louis
CII 3206 2:00 pm

Mar
20
2024
Intelligent Software in the Era of Deep Learning

With the end of Moore's Law and the rise of compute- and data-intensive deep-learning (DL) applications, the focus on arduous new processor design has shifted towards a more effective and agile approach: intelligent software that maximizes the performance gains of DL hardware such as GPUs.

In this talk, I will first highlight the importance of software innovation in bridging the gap between increasingly diverse DL applications and the existing powerful DL hardware platforms. The second part of my talk will recap my research on DL system software innovation, focusing on bridging 1) the precision mismatch between DL applications and high-performance GPU units like Tensor Cores (PPoPP '21 and SC '21), and 2) the computing-pattern mismatch between sparse and irregular DL applications, such as Graph Neural Networks, and the GPU computing paradigm tailored for dense and regular workloads (OSDI '21 and OSDI '23). Finally, I will conclude this talk with my vision and future work for building efficient, scalable, and secure DL systems.

Bio: Yuke Wang is a final-year Ph.D. candidate in the Department of Computer Science at the University of California, Santa Barbara (UCSB). He received his Bachelor of Engineering (B.E.) in software engineering from the University of Electronic Science and Technology of China (UESTC) in 2018. At UCSB, Yuke works with Prof. Yufei Ding (now at UC San Diego, CSE). Yuke's research interests include systems and compilers for deep learning and GPU-based high-performance computing. His projects cover graph neural network (GNN) optimization and its acceleration on GPUs. Yuke's research has resulted in 20+ publications (with 10 first-authored papers) in top-tier conferences, including OSDI, ASPLOS, ISCA, USENIX ATC, PPoPP, and SC. Yuke's research outcomes have been adopted for further research in industry (e.g., NVIDIA, OctoML, and Alibaba) and academia (e.g., the University of Washington and Pacific Northwest National Laboratory). Yuke is also the recipient of the NVIDIA Graduate Fellowship 2022 (top 10 among global applicants) and has industry experience at Microsoft Research, NVIDIA Research, and Alibaba. The ultimate goal of Yuke's research is to facilitate efficient, scalable, and secure deep learning in the future. https://www.wang-yuke.com/

Yuke Wang, University of California, Santa Barbara
CII 3206 12:00 pm

Mar
11
2024
Program Analysis: A Journey through Traditional Methods, Emerging Data-Driven Approaches, and Machine Learning Applications

Program analysis, the process of analyzing source code to derive its properties, has been a prominent research area for decades. Effective program analysis methods have played a pivotal role in ensuring program correctness and optimizing performance. In this talk, I will walk you through a journey centered on program analysis: it starts with classical symbolic- and logic-based analysis and testing techniques, ventures into the realm of emerging data-driven approaches, and concludes with their applications to machine learning models. I will not only share the results of my research in compiler optimization, bug detection, and model hardening but, more importantly, discuss my research vision and plan for building the next-generation programming environment.

Bio: Ke Wang is currently a visiting scholar at Stanford University (on leave from his position as a research scientist at Visa Research). His primary research interests span programming languages, program analysis, and machine learning. His work has been featured in premier research conferences in programming languages, machine learning, and artificial intelligence, including PLDI, OOPSLA, NeurIPS, ICLR, and IJCAI. Notably, Dr. Wang's work received a Distinguished Paper Award at OOPSLA 2020 and an oral presentation at NeurIPS 2022. He has served on the program committee for PLDI in 2020, 2021, and 2023. Prior to joining Visa Research, Dr. Wang obtained his PhD from UC Davis, where he was twice awarded (in 2015 and 2018) an Honorable Mention for Outstanding Graduate Research in the Computer Science Department. He also worked at Microsoft Research, Siemens Corporate Technology/Research, and Meta.

Ke Wang, Stanford University
CII 3206 12:00 pm

Feb
28
2024
Quantum Computing Now: A Tensor Approach

Quantum algorithms, such as Shor's factoring algorithm and Grover's search algorithm, claim to outperform classical algorithms by harnessing computational and communication properties unique to quantum systems, such as superposition and entanglement. Do quantum algorithms live up to these claims?

Google's quantum supremacy announcement in 2019 received broad questioning from academia and industry due to the debatable estimate of 10,000 years of running time for the classical simulation task on the Summit supercomputer. Has “quantum supremacy” already come? Or will it come a decade later? We take a reinforcement learning approach to the classical simulation of quantum circuits and demonstrate its great potential by reporting an estimated simulation time of less than 4 days, a speedup of 5.40x over the state-of-the-art method. Specifically, we utilize a tensor network approach and employ a deep reinforcement learning algorithm.
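
To give a flavor of the tensor-network view (a generic toy, not the speaker's RL-based simulator), the sketch below simulates a two-qubit Bell-state circuit by contracting gate tensors with np.einsum; for large circuits, choosing a good contraction order is the hard combinatorial problem that the reinforcement learning approach targets.

```python
# Toy tensor-network simulation of a 2-qubit circuit (H on qubit 0, then CNOT),
# contracted with einsum. Illustrative only; real simulators optimize the
# contraction order, which is the combinatorial problem targeted by RL.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)               # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)         # indices: out0 out1 in0 in1

zero = np.array([1.0, 0.0])                                 # |0> state for each qubit

# Contract the network: CNOT[a,b,i,j] * H[i,k] * |0>[k] * |0>[j]
state = np.einsum('abij,ik,k,j->ab', CNOT, H, zero, zero)
print(state.reshape(-1))   # amplitudes of |00>,|01>,|10>,|11>: ~[0.707, 0, 0, 0.707]
```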

Bio: Xiao-Yang (Yanglet) Liu joined RPI's CS department as a lecturer in September 2023. He received his Ph.D. and M.S. degrees from the Department of Electrical Engineering at Columbia University in 2023 and 2018, respectively. His research interests include quantum computing and tensor networks, deep reinforcement learning, and the model-openness framework (licenses) in AI. Xiao-Yang has contributed chapters to two graduate textbooks: Tensors for Data Processing and Reinforcement Learning for Cyber-Physical Systems. His papers have received over 4,400 citations, three of which are ESI highly cited papers. He received NeurIPS scholar awards in 2022/2023 and ICAIF-JPM awards in 2022/2023. He is an academic member of the Linux Foundation, LF AI & Data, FinOS, and CRAFT, collaborating on the open-source projects FinGPT and FinRL. As a (senior) PC member, he serves leading AI conferences such as NeurIPS, ICML, ICLR, AAAI, and ACM ICAIF. Xiao-Yang has chaired sessions at IJCAI 2019 and has been a lead organizer of multiple workshops and academic competitions, including the NeurIPS 2020/2021 First/Second Workshop on Quantum Tensor Networks in Machine Learning (QTNML), the ACM ICAIF FinRL Competition 2023, and the IJCAI 2020 Workshop on Tensor Networks Representations in Machine Learning.

Xiao-Yang (Yanglet) Liu, Rensselaer Polytechnic Institute
CII 3206 12:00 pm

Feb
26
2024
Bridging the Gap Between Theory and Practice: Solving Intractable Problems in a Multi-Agent Machine Learning World

Traditional computing sciences have made significant advances with tools like Complexity and Worst-Case Analysis. However, Machine Learning has unveiled optimization challenges, from image generation to autonomous vehicles, that surpass the analytical capabilities of past decades. Despite their theoretical complexity, such tasks often become more manageable in practice, thanks to deceptively simple yet efficient techniques like Local Search and Gradient Descent.

In this talk, we will delve into the effectiveness of these algorithms in complex environments and discuss developing a theory that transcends traditional analysis by bridging theoretical principles with practical applications. We will also explore the behavior of these heuristics in multi-agent strategic environments, evaluating their ability to achieve equilibria using advanced tools from Optimization, Statistics, Dynamical Systems, and Game Theory. The discussion will conclude with an outline of future research directions and my vision for a computational understanding of multi-agent Machine Learning.

Bio: Emmanouil-Vasileios (Manolis) Vlatakis Gkaragkounis is currently a Foundations of Data Science Institute (FODSI) Postdoctoral Fellow at the Simons Institute for the Theory of Computing, UC Berkeley, mentored by Prof. Michael Jordan. He completed his Ph.D. in Computer Science at Columbia University, under the guidance of Professors Mihalis Yannakakis and Rocco Servedio, and holds B.Sc. and M.Sc. degrees in Electrical and Computer Engineering. Manolis specializes in the theoretical aspects of Data Science, Machine Learning, and Game Theory, with expertise in beyond worst-case analysis, optimization, and data-driven decision-making in complex environments. His work has applications across multiple areas, including privacy, neural networks, economics and contract theory, statistical inference, and quantum machine learning.

Emmanouil-Vasileios (Manolis) Vlatakis Gkaragkounis, UC Berkeley
CII 3206 12:00 pm

Feb
21
2024
Statistical-Computational Tradeoffs in Random Optimization Problems

Optimization problems with random objective functions are central to computer science, probability, and modern data science. Despite their ubiquity, finding efficient algorithms for solving these problems remains a major challenge. Interestingly, many random optimization problems share a common feature, dubbed a statistical-computational gap: while the optimal value can be pinpointed non-constructively (through, e.g., probabilistic/information-theoretic tools), all known polynomial-time algorithms find strictly sub-optimal solutions. That is, an optimal solution can only be found through brute-force search, which is computationally expensive.

In this talk, I will discuss an emerging theoretical framework for understanding the fundamental computational limits of random optimization problems, based on the Overlap Gap Property (OGP). This is an intricate geometrical property that achieves sharp algorithmic lower bounds against the best known polynomial-time algorithms for a wide range of random optimization problems. I will focus on two models to demonstrate the power of the OGP framework: (a) the symmetric binary perceptron, a random constraint satisfaction problem and a simple neural network classifying/storing random patterns, widely studied in computer science, probability, and statistics communities, and (b) the random number partitioning problem as well as its planted counterpart, a classical worst-case NP-hard problem whose average-case variant is closely related to the design of randomized controlled trials. In addition to yielding sharp algorithmic lower bounds, our techniques also give rise to new toolkits for the study of statistical-computational tradeoffs in other models, including the online setting.
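
To make the flavor of such a gap concrete, the toy sketch below (my own illustration, not material from the talk) compares, on a small random number-partitioning instance, the discrepancy reached by a simple greedy heuristic with the exponentially smaller optimum found by brute force.

```python
# Toy illustration of a statistical-computational gap: random number partitioning.
# The best achievable discrepancy is exponentially small in n, but a simple greedy
# heuristic (and, more generally, known polynomial-time algorithms) lands far above it.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 18
x = rng.random(n)

# Greedy heuristic (polynomial time): place each number, largest first, on the lighter side.
diff = 0.0
for v in sorted(x, reverse=True):
    diff = abs(diff - v)

# Brute force (exponential time): check every +/-1 assignment.
signs = np.array(list(itertools.product([1.0, -1.0], repeat=n)))
best = np.abs(signs @ x).min()

print(f"greedy discrepancy:      {diff:.6f}")
print(f"brute-force discrepancy: {best:.8f}")
```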

Bio. Eren C. Kizildag is a Distinguished Postdoctoral Fellow at Columbia University, Department of Statistics. He received his PhD in Electrical Engineering and Computer Science from MIT in 2022, supervised by David Gamarnik. His research interests are at the intersection of computer science with probability, statistics, and data science. He is particularly interested in understanding statistical-computational tradeoffs in random computational problems and large-scale random models, as well as in mathematical foundations of machine learning and data science. 

Eren Kizildag, Columbia University
CII 3206 12:00 pm

Feb
16
2024
Security of Quantum Computing Systems

Quantum computer device research continues to advance rapidly to improve the size and fidelity of quantum computers. In parallel, an increasing number of existing quantum computing systems are being deployed and made available for use by researchers and the general public through cloud-based services. In particular, more and more quantum computer systems are becoming available as cloud-based services thanks to IBM Quantum, Amazon Braket, Microsoft Azure, and other cloud providers. Ease of access makes these computers accessible to almost anybody and can help advance developments in algorithms, quantum programs, compilers, etc. However, open, cloud-based access may make these systems vulnerable to novel security threats that could affect the operation of the quantum computers or the users of these devices. Further, as with any cloud-based computing system, users do not have physical control of the remote devices. Untrusted cloud providers, or malicious insiders within an otherwise trusted cloud provider, also pose novel security threats. Users' programs could be stolen or manipulated, or output data could be leaked. The goal of this seminar is to introduce the audience to recent research on the security of quantum computing systems. During the seminar, novel security attacks on quantum computing systems will be discussed, as well as corresponding defenses. The focus of the seminar will be on superconducting-qubit quantum computers; however, the security ideas can be applied to other types of quantum computers.

 

Prof. Jakub Szefer’s research focuses on computer architecture and hardware security. His research encompasses secure processor architectures, cloud security, FPGA (Field Programmable Gate Array) attacks and defenses, hardware FPGA implementation of cryptographic algorithms, and, most recently, quantum computer cybersecurity. Among others, Prof. Szefer is the author of the first book focusing on processor architecture security, “Principles of Secure Processor Architecture Design”, published in 2018, and he is a co-editor of a book on “Security of FPGA-Accelerated Cloud Computing Environments”, published in 2023. He is a recipient of awards such as the NSF CAREER Award, and is a senior member of the IEEE (2019) and the ACM (2022).

 

Jakub Szefer, Yale University
DCC 337 11:00 am

Feb
14
2024
Co-Design of Quantum Software and Hardware: The Pulse-Level Paradigm Shift

In this talk, I will provide an overview of my contributions to quantum computing, specifically focusing on hardware-software co-design for quantum computing by diving into the pulse level. Transitioning to a pulse-level workflow paradigm can reduce the circuit duration of quantum programs, enabling the execution of deeper quantum circuits on quantum machines with the same decoherence time. This shift in focus has been substantiated by my research to enhance solutions in practical areas such as quantum machine learning, quantum finance, and quantum chemistry. In the first part of the talk, I will present QPulse, which proposes a set of designs for parameterized pulses and evaluates them based on specific metrics, including their expressive capacity, entanglement capabilities, and effective parameter dimensions. Then, I will present NAPA, a cutting-edge native-pulse ansatz generator framework specifically tailored for variational quantum algorithms. By progressively searching the pulse-level circuit architecture, we build a pulse ansatz that demonstrates a significant advantage over gate-level quantum circuits on benchmarking tasks. Finally, I will discuss my ongoing work and future research towards building efficient design-automation tools and scalable hybrid classical-quantum algorithms to implement quantum technologies for practical real-world applications.

Bio: Zhiding Liang is currently a Ph.D. student in the Department of Computer Science and Engineering at the University of Notre Dame under the supervision of Prof. Yiyu Shi. His current research interests include hardware-software co-design for quantum computing and quantum machine learning. The results of his research have been published in prestigious conferences and journals, including DAC, ICCAD, QCE, TCAD, and TVCG. He was selected as a DAC Young Fellow in both 2021 and 2022. He has also been nominated as the recipient of the Edison Innovation Fellowship by the IDEA Center at the University of Notre Dame. He is devoted to quantum education and outreach; he is the co-founder of the Quantum Computer System (QuCS) Lecture Series, an impactful public online lecture series in the quantum computing community. He also led the organization of the first ACM/IEEE Quantum Computing for Drug Discovery Challenge at ICCAD, a top-tier computer science conference. He is one of the major contributors to the TorchQuantum library, which has been adopted by the IBM Qiskit Ecosystem and the PyTorch Ecosystem with 1.1K+ stars on GitHub. He received his B.S. in Electrical Engineering from the University of Wisconsin-Madison.

Zhiding Liang, University of Notre Dame
CII 3206 12:00 pm

Feb
7
2024
Types and Metaprogramming for Correct, Safe, and Performant Software Systems

In this talk, I will present an overview of my research, which provides novel directions for building correct, safe, and performant software systems through the use of programming languages and compiler techniques. In the first part of the talk, I will introduce reachability type systems, a family of static type systems aiming at tracking sharing, separation, and side-effects in higher-order imperative programs. Reachability types provide a smooth integration of Rust-style ownership types with higher-level programming abstractions, such as first-class functions. In the second part, I will discuss how metaprogramming techniques can aid in building correct, flexible, and performant program analyzers. I will introduce GenSym, a parallel symbolic-execution compiler derived from a high-level definitional symbolic interpreter using program generation techniques. GenSym generates code in continuation-passing style to perform parallel symbolic execution of LLVM IR programs, and significantly outperforms similar state-of-the-art tools. The talk will also cover my future research agenda, such as applications of reachability types in quantum computing.

 

Bio: Guannan Wei is currently a postdoctoral researcher at Purdue University. His research interests lie in programming languages and software engineering, including designing better programming languages and program analyzers with high-level programming abstractions. His contributions have been published in flagship programming languages and software engineering venues, such as POPL, OOPSLA, ICFP, ECOOP, ICSE, and ESEC/FSE. Guannan received his PhD degree (2023) in Computer Science from Purdue University, advised by Tiark Rompf. He obtained his MS degree in Computer Science from the University of Utah. He is the 2022 recipient of the Maurice H. Halstead Memorial Award for Software Engineering Research. More of Guannan’s work can be found at https://continuation.passing.style.

Guannan Wei, Purdue University
CII 3206 12:00 pm

Jan
31
2024
ECSE/CS Joint Seminar: Intelligent Cross-Stack Co-Design of Quantum Computer Systems

Quantum computing has the potential to solve classically intractable problems with greater speed and efficiency, and recent years have witnessed exciting advancements in this domain. However, there remains a substantial gap between the algorithmic requirements and the available devices in terms of qubit number and system reliability. To close this gap, it is critical to perform cross-stack co-design of various technology layers, from algorithm and program design to compilation and hardware architecture.

In this talk, I will provide an overview of my contributions to the software stack and hardware support for quantum systems. At the algorithm and program level, I will introduce QuantumNAS, a framework for quantum program structure (ansatz) design for variational quantum algorithms. QuantumNAS utilizes noisy feedback from quantum devices to search for an ansatz and qubit mapping tailored to specific hardware, leading to notable resource reduction and reliability enhancements. Then, at the compiler level, I will discuss a compilation framework for the Field-Programmable Qubit Array (FPQA) implemented by emerging reconfigurable atom arrays. This framework leverages movable atoms for routing two-qubit gates and generates atom movements and gate scheduling with high scalability and parallelism. On the hardware support front, I will present SpAtten, an algorithm-architecture-circuit co-design aimed at Transformer-based quantum error correction decoding. SpAtten supports on-the-fly syndrome pruning to eliminate less critical inputs and boost efficiency. Finally, I will conclude with an overview of my ongoing work and my research vision towards building software and architecture support for quantum computing, and domain-specific computing for practical quantum advantages.

Hanrui Wang is a Ph.D. candidate at MIT EECS advised by Prof. Song Han. His research focuses on the software stack and hardware support for quantum computer systems, and AI for quantum. His work appears in conferences such as MICRO, HPCA, QCE, DAC, ICCAD, and NeurIPS and has been recognized by the QCE 2023 Best Paper Award, the ICML RL4RL 2019 Best Paper Award, a first-place award in the ACM Student Research Competition, the Best Poster Award at the NSF AI Institute, the Best Demo Award at the DAC university demo, and selection as an MLCommons Rising Star in machine learning and systems. His work is supported by the Qualcomm Innovation Fellowship, the Baidu Fellowship, and the Unitary Fund. He is the creator of the TorchQuantum library, which has been adopted by the IBM Qiskit Ecosystem and the PyTorch Ecosystem with 1.1K+ stars on GitHub. He is passionate about teaching and has served as a course developer and co-instructor for a new course on efficient ML and quantum computing at MIT. He is also the co-founder of the QuCS ("Quantum Computer Systems") forum for quantum education.

Hanrui Wang, MIT
DCC 318 4:00 pm

2023

Dec
1
2023
Computer Science Poster Session

Michael Lenyszyn

Advisor: Konstantin Kuzmin

Title: Author Disambiguation

 

Sean Patch

Advisor: Radoslav Ivanov

Title: Tree Identification and Segmentation

 

Andrew Wilkerson

Advisor: Carlos Varela

Title: Session Types in SALSA

 

Sikai Ruan

Advisor: Lirong Xia

Title: Adversarial Training with Robust Loss Function

Computer Science Graduate Students
Lally 209B 4:00 pm

Nov
10
2023
Hacking, Cracking, Crypto at RPI

Just over 10 years ago, the RPISEC hacking club started at RPI. Throughout the years, students were trained by the club and went on to become successful entrepreneurs, experts in military and intelligence agencies, and cybersecurity researchers for Microsoft, Google, and Apple. They also traveled the country and the globe winning hacking competitions. These alumni have now started an endowment for RPISEC and are funding equipment, training, and travel.

In this seminar, we demonstrate hacking techniques taught by the club and talk about how to get involved. The following day, the speaker, an alumnus of the club, will teach an 8-hour training on campus to get anyone started in the field.

 

Jeremy Blackthorne is a co-founder of the Boston Cybernetics Institute. He was a researcher at MIT Lincoln Laboratory, where he focused on building and breaking cyber solutions for the U.S. government. Before that, Jeremy was a scout sniper in the U.S. Marine Corps and completed three tours in Iraq. He has a master’s in computer science and is an alumnus of RPISEC. 

Jeremy Blackthorne, Boston Cybernetics Institute
Sage 4101 4:00 pm

Oct
26
2023
Cryptis: Cryptographic Reasoning in Separation Logic

In this presentation, I'll talk about Cryptis, a tool we have been developing for verifying systems that feature cryptographic components, such as a key-value store server that connects to clients using some encryption protocol.  Cryptis is a separation logic embedded in the Coq proof assistant, which can be used to describe the implementation of protocols and systems that use these protocols.

 

Cryptis can check proofs of correctness involving arguments in the symbolic model of cryptography, thus guaranteeing that protocols can deliver strong security guarantees.

 

Bio:  Arthur Azevedo de Amorim joined the Department of Computer Science at RIT in 2023 as an assistant professor.  His research interests revolve around the use of programming-language and software-verification techniques to improve the security and reliability of software.

Arthur Azevedo de Amorim, Rochester Institute of Technology
Bruggeman Room 2:00 pm

Oct
2
2023
Building the Tools to Program a Quantum Computer

A quantum computer is as hard for us to build as the algorithms it is built to run are for us to understand. In order to deliver asymptotic advantage over classical algorithms, quantum algorithms exploit inherently quantum phenomena: the ability of data to exist in a superposition of multiple states, to exhibit constructive and destructive interference, and to leverage the spooky phenomenon of entanglement. However, without appropriate and delicate manipulation of the quantum state stored by the computer, an implementation of an algorithm will produce incorrect outputs or lose its quantum computational advantage.

As a result, developers will face challenges when programming a quantum computer to correctly realize quantum algorithms. In this talk, I present these programming challenges and what we can do to overcome them. In particular, I address how basic programming abstractions – such as data structures and control flow – fail to work correctly on a quantum computer, and the progress we’ve made in re-inventing them to meet the demands of quantum algorithms.

Bio: Charles Yuan is a Ph.D. student at MIT working with Michael Carbin whose research interests lie in programming languages for quantum computation. His work has appeared in the ACM SIGPLAN POPL and OOPSLA conferences and has been recognized with the SIGPLAN Distinguished Artifact Award and the CQE-LPS Doc Bedard Fellowship.

 

Charles Yuan, MIT
Bruggeman Conference Center 3:30 pm

Sep
13
2023
On machine learning and computational nanoscience

Machine learning has revolutionized the use of data and has proven tremendously applicable due to its ability to find relations in data. One area of application is nanoscience, specifically the investigation of monolayer-protected nanoclusters (MPCs). Experimental research on MPCs requires expensive materials and equipment, and traditional computational research requires costly computation resources. ML is used in this context to alleviate the computational costs by utilizing distance-based regression models as surrogates for expensive Density Functional Theory-based calculations. Our research has so far primarily focused on feature selection, since MPCs provide high-dimensional data.
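
As a rough illustration of the surrogate idea (with synthetic data and a hypothetical stand-in target function, not the actual DFT/MPC pipeline), the sketch below fits a distance-based regressor to a small set of expensive reference calculations and then predicts the property for new structures.

```python
# Minimal sketch of a distance-based regression surrogate for an expensive
# simulation (generic illustration with synthetic data, not the actual DFT/MPC pipeline).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

def expensive_calculation(features):
    """Stand-in for a costly DFT-style computation (made up for illustration)."""
    return np.sin(features).sum(axis=1) + 0.1 * rng.normal(size=len(features))

# A small set of structures for which the expensive calculation was actually run.
train_features = rng.normal(size=(100, 30))     # high-dimensional descriptors
train_targets = expensive_calculation(train_features)

# Distance-based surrogate: prediction is a distance-weighted average of nearby known cases.
surrogate = KNeighborsRegressor(n_neighbors=5, weights='distance')
surrogate.fit(train_features, train_targets)

new_features = rng.normal(size=(5, 30))
print("surrogate predictions:", surrogate.predict(new_features))
```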

Bio: Joakim Linja received his Ph.D. degree in Mathematical Information Science from the University of Jyväskylä (JYU) in April 2023. He received his master's degree in physics from JYU in 2017, specializing in nanoscience and computational science. He is currently working as a postdoctoral scholar at JYU. His research interests lie in high-performance computing, machine learning, GPU computation, nanoscience, and physics.

Joakim Linja, University of Jyväskylä (JYU)
Sage 4510 11:00 am

Aug
23
2023
An Additive Autoencoder with no need of deep learning

Deep Learning techniques underlie many amazing accomplishments in artificial intelligence and machine learning. Their theory does not match their empirical achievements, but the applied results have largely been in favor of DL. In our recently published paper [1], we question this belief. In the context of autoencoding, i.e., nonlinear dimension encoding-decoding, we propose a new, additive model that strictly separates approximation of bias, linear behavior, and nonlinear behavior. With this approximation, we found no help from, or even need of, deeper network structures to encapsulate nonlinear behavior. We also witnessed worse data reconstruction results when typical data-batch-driven optimization techniques were applied to train the additive autoencoder. Addressing the underlying reasons for the behavior observed in our extensive set of empirical experiments would be an interesting endeavor.

[1] Kärkkäinen, T., & Hänninen, J. (2023). Additive autoencoder for dimension estimation. Neurocomputing, 551, 126520.
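
One schematic way to read the additive decomposition is sketched below; this is my own minimal interpretation under assumptions (PCA for the linear part, a shallow network decoding the residual from the low-dimensional code), not the exact model of [1].

```python
# Schematic sketch of an additive "bias + linear + nonlinear" reconstruction,
# loosely inspired by the additive autoencoder idea; NOT the exact model of [1],
# only an illustration of separating the three components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = np.tanh(rng.normal(size=(500, 20)) @ rng.normal(size=(20, 20)))   # toy nonlinear data

bias = X.mean(axis=0)                              # 1) bias component
pca = PCA(n_components=5).fit(X - bias)            # 2) linear component via a 5-dim code
code = pca.transform(X - bias)
linear_part = pca.inverse_transform(code)

residual = X - bias - linear_part                  # 3) shallow nonlinear decoder of the code
nonlinear = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
nonlinear.fit(code, residual)

X_hat = bias + linear_part + nonlinear.predict(code)
print("bias+linear MSE:          ", np.mean(residual ** 2))
print("bias+linear+nonlinear MSE:", np.mean((X - X_hat) ** 2))
```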

Biography: Tommi Kärkkäinen (TK) received his Ph.D. degree in Mathematical Information Technology from the University of Jyväskylä (JYU) in 1995. Since 2002, he has served as a full professor of Mathematical Information Technology at the Faculty of Information Technology (FIT), JYU. TK has led 50 different R&D projects and has supervised over 60 PhD students. He has published over 200 peer-reviewed articles and received the Innovation Prize of JYU in 2010. He has served in many administrative positions at FIT and JYU, currently leading a Research Division and a Research Group on Human and Machine based Intelligence in Learning. His main research interests include data mining, machine learning, learning analytics, and nanotechnology. He is a senior member of the IEEE.

Tommi Kärkkäinen, Faculty of Information Technology, JYU
Sage 5101 2:00 pm

Apr
26
2023
Near-Data Computing: Hype or Game Changer?

As a very hot topic today, near-data computing has a beautifully simple rationale: moving computational tasks closer to where data reside could improve overall system performance and efficiency. However, its large-scale commercial success has remained elusive so far, despite countless awesome research papers and hundreds of millions of dollars spent on its R&D. This disappointing status quo warrants doubts and skepticism: Will it turn out to be hype just like many others we have seen over the years? Are there any fatal flaws in this simple idea? Facing these questions, proponents of near-data computing must be brutally honest with themselves and humbly search for the (inconvenient) truth, rather than conveniently blaming the industry's reluctance/laziness to embrace disruptive technologies. This talk will discuss the pitfalls of prior and ongoing R&D efforts, and present the correct (or at least the most convenient) way to commence the commercialization journey of near-data computing. This talk will also show that there is still a huge space for research innovation in this area, despite intensive research over the past 20 years.

Bio: Tong Zhang is currently a Professor in the Electrical, Computer and Systems Engineering Department at Rensselaer Polytechnic Institute (RPI), NY. He received his Ph.D. degree in electrical engineering from the University of Minnesota in 2002 and joined the faculty of RPI that year. He has graduated 20 PhD students and has authored/co-authored over 160 papers, with a citation h-index of 43. Among his research accomplishments, he made pioneering contributions to enabling the pervasive use of low-density parity-check (LDPC) codes in commercial HDDs/SSDs and establishing the research area of flash memory signal processing. He co-founded ScaleFlux (San Jose, CA) to spearhead the commercialization of near-data computing, and currently serves as its Chief Scientist. He is an IEEE Fellow.

Tong Zhang, Rensselaer Polytechnic Institute
SAGE 3510 11:00 am

Apr
19
2023
Elastic Algorithm-Architecture Co-Design for Scalable and Energy-Efficient ML

Machine Learning (ML) techniques, especially Deep Neural Networks (DNNs), have been driving innovations in many application domains. These breakthroughs are powered by the computational improvements in processor technology driven by Moore's Law. However, the need for computational resources is insatiable when applying ML to large-scale real-world problems. Energy efficiency is another major concern of large-scale ML. The enormous energy consumption of ML models not only increases costs in data centers and decreases the battery life of mobile devices but also has a severe environmental impact. Entering the post-Moore's Law era, keeping performance and energy efficiency up with the scaling of ML remains challenging.

This talk addresses the performance and energy-efficiency challenges of ML. The core hypothesis can be encapsulated in a few questions. Do we need all the computations and data movements involved in conventional ML processing? Does redundancy exist at the hardware level? How can we better approach large-scale ML problems with new computing paradigms? This talk presents how to explore the elasticity in ML processing and hardware architectures: from the algorithm perspective, redundancy-aware processing methods are proposed for DNN training and inference, as well as large-scale classification problems and long-range Transformers; from the architecture perspective, balanced, specialized, and flexible designs are presented to improve efficiency.

Bio: Liu Liu is an Assistant Professor in the Department of Electrical, Computer, and Systems Engineering at RPI. He received his Ph.D. in Computer Science from the University of California, Santa Barbara. His research interests lie at the intersection of computer architecture and machine learning, towards high-performance, energy-efficient, and robust machine intelligence.

Liu Liu, Rensselaer Polytechnic Institute
Sage 3510 11:00 am

Apr
14
2023
Towards Distributed MLOps: Theory and Practice

As machine learning (ML) technologies get widely applied to many domains, it has become essential to rapidly develop and deploy ML models. Towards this goal, MLOps has recently emerged as a set of tools and practices for operationalizing production-ready models in a reliable and efficient manner. However, several open problems exist, including how to automate the ML pipeline that includes data collection, model training, and deployment (inference) with support for distributed data and models stored at multiple sites. In this talk, I will cover some theoretical foundations and practical approaches towards enabling distributed MLOps, i.e., MLOps in large-scale distributed systems. I will start with explaining the requirements and challenges. Then, I will describe how our recent theoretical developments in the areas of coreset, federated learning, and model uncertainty estimation can support distributed MLOps. As a concrete example, I will dive into the details of a federated learning algorithm with flexible control knobs, which adapts the learning process to accommodate time-varying and unpredictable resource availabilities, as often seen in systems in operation, while conforming to a given budget for model training. I will finish the talk by giving an outlook on some future directions.
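
For context, the sketch below is a generic FedAvg-style training loop on a toy least-squares problem, in which the number of local steps per round acts as a control knob trading local computation against communication; it is only an illustration of the setting, not the adaptive algorithm described in the talk.

```python
# Minimal FedAvg-style sketch on a toy linear-regression problem. The number of
# local gradient steps per round acts as a control knob trading communication
# rounds against local computation; illustrative only, not the talk's algorithm.
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim, samples = 5, 10, 50
w_true = rng.normal(size=dim)

# Each client holds its own local dataset.
data = []
for _ in range(num_clients):
    X = rng.normal(size=(samples, dim))
    y = X @ w_true + 0.1 * rng.normal(size=samples)
    data.append((X, y))

def local_update(w, X, y, local_steps, lr=0.01):
    """Run a few local gradient steps on one client's data."""
    w = w.copy()
    for _ in range(local_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

w = np.zeros(dim)
rounds, local_steps = 50, 5          # control knobs: communication vs. local work
for _ in range(rounds):
    client_models = [local_update(w, X, y, local_steps) for X, y in data]
    w = np.mean(client_models, axis=0)   # server averages the client models

print("distance to ground truth:", np.linalg.norm(w - w_true))
```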

 

Bio: Shiqiang Wang is a Staff Research Scientist at IBM T. J. Watson Research Center, NY, USA. He received his Ph.D. from Imperial College London, United Kingdom, in 2015. His current research focuses on the intersection of distributed computing, machine learning, networking, and optimization, with a broad range of applications including data analytics, edge-based artificial intelligence (Edge AI), Internet of Things (IoT), and future wireless systems. He received the IEEE Communications Society (ComSoc) Leonard G. Abraham Prize in 2021, IEEE ComSoc Best Young Professional Award in Industry in 2021, IBM Outstanding Technical Achievement Awards (OTAA) in 2019, 2021, and 2022, and multiple Invention Achievement Awards from IBM since 2016. For more details, please visit his homepage at: https://shiqiang.wang

 

Shiqiang Wang, IBM T.J. Watson Research Center
SAGE 3510 1:00 pm

Apr
7
2023
Computer Science Poster Session

Vasundhara Acharya
Advisor: Prof. Bulent Yener
Title:
Tuberculosis Prediction from Lung Tissue Images of Diversity Outbred Mice using Cell Graph Neural Network

Lorson Blair
Advisor: Prof. Stacy Patterson
Title:
A Continuum Approach for Collaborative Task Processing in UAV MEC Networks

Jesse Ellin
Advisor: Prof. Alex Gittens
Title:
Knowledge Graph Anomaly Detection via Probabilistic GANs

Shawn George
Advisor: Prof. Konstantin Kuzmin
Title:
Synergy: Abstract2Gene

William Hawkins
Advisor: Prof. George Slota
Title:
Accelerating Graph Neural Network Training using Dynamic Mode Decomposition

Neha Deshpande
Advisor: Prof. Chuck Stewart
Title:
Tusk Detection for Elephant Re-Identification

Ian Conrad
Advisor: Prof. Sibel Adali
Title:
Contextualized Moral Foundations Analysis

Ruixiong Hu
Advisor: Mark Shephard
Title:
Mesh Adaptation in Multilayer Laser Powder Bed Fusion

Ashley Choi
Advisor: Prof. Sibel Adali
Title:
News Story Collection API and Visualization

Ohad Nir
Advisor: Prof. Chuck Stewart
Title:
Detection of Capuchin Monkeys

Connor Wooding
Advisor: Prof. George Slota
Title:
GPU Parallelization for Biconnectivity Algorithms

Roman Nett
Advisor: Prof. Bulent Yener
Title:
Graph Neural Network Using Local Cell Graph Features for Cancer Classification

Andy Bernhardt
Advisor: Prof. Tomek Strzalkowski
Title:
Imageability as an Indicator of Authorship

Jacy Sharlow
Advisor: Prof. Barb Cutler
Title:
Automating the Artistic Pipeline Regarding Skin Wrinkling in the Geometric Space

Steven Laverty
Advisor: Prof. Mohammed Zaki
Title:
Protein Folding with Deep RL

Zachary Fernandes
Advisor: Prof. Mei Si
Title:
Investigating the Impact of Self-Attention on Reinforcement Learning

Seth Laurenceau
Advisor: Prof. Ana Milanova
Title:
Verification of Python Docstrings

Ryan Kaplan
Advisor: Prof. Alex Gittens
Title:
Transfer Learning on Images for Graph Problems

Mohammed Shahid Modi
Advisor: Prof. Bolek Szymanski
Title:
Poster on Dynamics of Ideological Bias Shifts of Users on Social Media Platforms

Dhruva Hiremagalur Narayan
Advisor: Prof. Mohammed Zaki
Title:
Understanding forms using graph neural networks

Harshaa Hiremagalur Narayan
Advisor: Prof. Mohammed Zaki
Title:
Using BERT-GCN on embeddings created using dictionary word

Computer Science Graduate Students
Lally 104 4:00 pm

Apr
3
2023
Deep Learning for Drug Discovery and Development

Artificial intelligence (AI) has become woven into therapeutic discovery to accelerate drug discovery and development processes since the emergence of deep learning. For drug discovery, the goal is to identify drug molecules with desirable pharmaceutical properties. I will discuss our deep generative models that relax the discrete molecule space into a differentiable one and reformulate the combinatorial optimization problem into a differentiable optimization problem, which can be solved efficiently. Drug development, on the other hand, focuses on conducting clinical trials to evaluate the safety and effectiveness of the drug on human bodies. To predict clinical trial outcomes, I design deep representation learning methods that capture the interaction between multi-modal clinical trial features (e.g., drug molecules, patient information, disease information), achieving an F1 score of 0.847 in predicting phase III approval. Finally, I will present my future work on geometric deep learning for drug discovery and predictive models for drug development.

Bio: Tianfan Fu is a Ph.D. candidate in the School of Computational Science and Engineering at the Georgia Institute of Technology, advised by Prof. Jimeng Sun. His research interest lies in machine learning for drug discovery and development. Particularly, he is interested in generative models on both small-molecule & macro-molecule drug design and deep representation learning on drug development. The results of his research have been published in leading AI conferences, including AAAI, AISTATS, ICLR, IJCAI, KDD, NeurIPS, UAI, and top domain journals such as Nature, Cell Patterns, Nature Chemical Biology, and Bioinformatics. His work on clinical trial outcome prediction has been selected as the cover paper on Cell Patterns. In addition, Tianfan is an active community builder. He co-organized the first three AI4Science workshop on leading AI conferences (https://ai4sciencecommunity.github.io/); he co-founded Therapeutic Data Commons (TDC) initiative (https://tdcommons.ai/), an ecosystem with AI-solvable tasks, AI-ready datasets, and benchmarks in therapeutic science. Additional information is available at https://futianfan.github.io/.

Tianfan Fu, Georgia Institute of Technology
SAGE 5101 11:00 am

Mar
29
2023
Advancing Data-Aspect AI: From Efficient Data Annotation to High-Quality Data Creation

The rapid progress of deep learning in recent years has led to significant advances in various fields such as computer vision, natural language processing, and speech recognition. The success of deep learning models heavily relies on the availability of large-scale and high-quality datasets. To address this challenge, active learning is a representative strategy that interactively queries human annotators for efficient data annotation. I will discuss how to design theoretically and empirically effective active learning strategies for deep neural networks. On the other hand, powerful deep learning models now have the potential to create high-quality data for human needs. I will demonstrate this by exploring recent advancements in generative methods. Taking the neural style transfer problem as an example, I will discuss how to achieve a desirable balance between content, style, and visual quality when creating visual content. I will also share potential future directions of data-aspect AI, as well as applications to biomedical domains.
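
As a point of reference, the sketch below shows a standard uncertainty-sampling active learning loop on synthetic data (a generic baseline, not the speaker's strategy): the model repeatedly queries labels for the pool points it is least confident about.

```python
# Minimal uncertainty-sampling active learning loop (a generic baseline, not the
# speaker's method): repeatedly query labels for the points the current model is
# least confident about.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                       # 20 query rounds
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    margin = np.abs(proba[:, 1] - 0.5)    # small margin = high uncertainty
    query = pool[int(np.argmin(margin))]  # ask the annotator for this label
    labeled.append(query)
    pool.remove(query)

model.fit(X[labeled], y[labeled])
print("accuracy with actively chosen labels:", model.score(X, y))
```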


Bio: Dr. Siyu Huang is a postdoctoral fellow at the John A. Paulson School of Engineering and Applied Sciences, Harvard University. He received his B.E. and Ph.D. degrees from Zhejiang University in 2014 and 2019, respectively. Prior to joining Harvard, he was a visiting scholar at Carnegie Mellon University in 2018, a research scientist at Baidu Research from 2019 to 2021, and a research fellow at Nanyang Technological University in 2021. His research interests include computer vision, deep learning, and generative AI, with 30 publications in top-tier conferences and journals.

Siyu Huang, Harvard University
Sage 3510 11:00 am

Mar
27
2023
Software Quality Assessment via Specification Synthesis

Program specifications provide clear and precise descriptions of the behaviors of a software system and serve as a blueprint for its design and implementation. They help ensure that the system is built correctly and that its functions work as intended, making it easier to troubleshoot, modify, and verify the system if needed. NIST suggests that the lack of high-quality specifications is the most common cause of software project failure. Nowadays, successful projects have an equal or even higher number of lines of specification than of code.

 

In this talk, I will present my research on synthesizing both informal and formal specifications for software systems. I will explain how we use a combination of program and natural-language semantics to automatically generate informal specifications, even for native methods in Java that have no implementation, which previous methods could not handle. By leveraging the generated specifications, we successfully detect many code bugs and code-comment inconsistencies. Additionally, I will describe how we derive formal specifications from natural-language comments using a search-based technique. The generated formal specifications have been applied to facilitate program analysis for existing tools. They have been shown to greatly improve the capabilities of these tools, detecting many new information-leaking paths and reducing false alarms in testing. Overall, the talk will highlight the importance of program specifications in software engineering and demonstrate the potential of our techniques to improve the development and maintenance of software systems.

 

Bio: Juan Zhai is an Assistant Teaching Professor in the Department of Computer Science at Rutgers University. Previously, she was a Postdoctoral Research Associate working with Prof. Xiangyu Zhang in the Department of Computer Science at Purdue University. She also worked as a tenure-track Assistant Professor at Nanjing University, where she obtained her Ph.D. degree. Her research interests lie in software engineering, natural language processing, and security, focusing on specification synthesis and enforcement. She is the recipient of the Distinguished Paper Award at USENIX Security 2017 and the Outstanding Doctoral Student Award at NASAC 2016.

Juan Zhai, Rutgers University
Sage 5101 12:00 pm

Mar
20
2023
Reasoning with visual imagery: Research at the intersection of autism, AI, and visual thinking

While decades of AI research on high-level reasoning have yielded many techniques for many tasks, we are still quite far from having artificial agents that can just “sit down” and perform tasks like intelligence tests without highly specialized algorithms or training regimes. We also know relatively little about how and why different people approach reasoning tasks in different (often equally successful) ways, including in neurodivergent conditions such as autism. In this talk, I will discuss: 1) my lab's work on AI approaches for reasoning with visual imagery to solve intelligence tests, and what these findings suggest about visual cognition in autism; 2) how imagery-based agents might learn their domain knowledge and problem-solving strategies via search and experience, instead of these components being manually designed, including recent leaderboard results on the very difficult Abstraction & Reasoning Corpus (ARC) ARCathon challenge; and 3) how this research can help us understand cognitive strategy differences in people, with applications related to neurodiversity and employment. I will also discuss 4) our Film Detective game that aims to visually support adolescents on the autism spectrum in improving their theory-of-mind and social reasoning skills.

 

Bio:  Maithilee Kunda is an assistant professor of computer science at Vanderbilt University. Her work in AI, in the area of cognitive systems, looks at how visual thinking contributes to learning and intelligent behavior, with a focus on applications related to autism and neurodiversity. She directs Vanderbilt’s Laboratory for Artificial Intelligence and Visual Analogical Systems and is a founding investigator in Vanderbilt’s Frist Center for Autism & Innovation.

She has led grants from the US National Science Foundation and the US Institute of Education Sciences and has also collaborated on large NSF Convergence Accelerator and AI Institute projects.  She has published in Proceedings of the National Academy of Sciences (PNAS) and in the Journal of Autism and Developmental Disorders (JADD), the premier journal for autism research, as well as in AI and cognitive science conferences such as ACS, CogSci, AAAI, ICDL-EPIROB, and DIAGRAMS, including a best paper award at the ACS conference in 2020.  Also in 2020, her research on innovative methods for cognitive assessment was featured on the national news program CBS 60 Minutes, as part of a segment on neurodiversity and employment.  She holds a B.S. in mathematics with computer science from MIT and Ph.D. in computer science from Georgia Tech.

Maithilee Kunda, Vanderbilt University
Sage 5101 12:00 pm

Mar
15
2023
Empowering Graph Neural Networks from a Data-Centric View

Many learning tasks in Artificial Intelligence require dealing with graph data, ranging from biology and chemistry to finance and education. As powerful learning tools for graph inputs, graph neural networks (GNNs) have demonstrated remarkable performance in various applications. Despite their success, unlocking the full potential of GNNs requires tackling the limitations of robustness and scalability. In this talk, I will present a fresh perspective on enhancing GNNs by optimizing the graph data, rather than designing new models. Specifically, first, I will present a model-agnostic framework which improves prediction performance by enhancing the quality of an imperfect input graph. Then I will show how to significantly reduce the size of a graph dataset while preserving sufficient information for GNN training.

Wei Jin, Michigan State University
SAGE 3510 11:00 am

Mar
13
2023
Make Knowledge Computable: Towards Differentiable Neural-Symbolic Reasoning

My ultimate research vision is to develop an AI model that can emulate human reasoning and thinking, which requires building a differentiable Neural-Symbolic AI. This approach involves enabling neural models to interact with external symbolic modules, such as knowledge graphs, logical engines, math calculators, and physical/chemical simulators. This will facilitate end-to-end training of such a Neural-Symbolic AI system without annotated intermediate programs.

During this talk, I will introduce two of my research endeavors focused on building differentiable neural-symbolic AI using knowledge graphs. Firstly, I will discuss how Symbolic Reasoning can help Neural Language Models. I designed OREO-LM, which incorporates knowledge graph relational reasoning into a Large Language Model, significantly improving multi-hop question answering using a single model. Secondly, I will discuss how Neural Embedding can help Symbolic Logic Reasoning. I solve complex first-order logic queries in neural embedding space, using fuzzy logic operators to create a learning-free model that fulfills all logic axioms. Finally, I will discuss my future research plans on applying differentiable neural-symbolic AI to improve program synthesis, architecture design, and scientific discovery.
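
For intuition, the sketch below shows product-style fuzzy logic operators of the kind used to score logical queries whose atoms receive soft truth values from an embedding model; the operators and the example query are generic illustrations, not the exact formulation in this work.

```python
# Generic fuzzy-logic operators over soft truth values in [0, 1], of the kind used
# to score first-order logic queries whose atoms are scored by an embedding model.
# Illustrative only; the atom names and scores below are made up.
import numpy as np

def f_and(a, b):   # product t-norm
    return a * b

def f_or(a, b):    # probabilistic sum (dual t-conorm)
    return a + b - a * b

def f_not(a):
    return 1.0 - a

# Example query over three candidate entities:
#   (directed_by_nolan AND NOT is_sequel) OR is_documentary
directed_by_nolan = np.array([0.9, 0.2, 0.7])
is_sequel         = np.array([0.1, 0.8, 0.6])
is_documentary    = np.array([0.0, 0.1, 0.9])

scores = f_or(f_and(directed_by_nolan, f_not(is_sequel)), is_documentary)
print(scores)   # per-candidate soft truth values of the whole query
```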

Bio: Ziniu Hu is a fifth-year PhD student in computer science at UCLA. His research focuses on integrating symbolic knowledge reasoning with neural models. Under the guidance of Professors Yizhou Sun and Kai-Wei Chang, he has developed several models that have successfully solved complex question-answering and graph mining problems. His research has received support from the Baidu Ph.D. Fellowship and the Amazon Ph.D. Fellowship. He also contributed to the research community as the research-track workflow co-chair for KDD'23 and was awarded top reviewer at NeurIPS'22. His research has been deployed in various industrial applications, including TikTok unbiased recommendation, Google YouTube Shorts recommendation, Microsoft Graph anomaly detection, and Facebook's hate speech detection service. His research has received several awards, including the best paper award at WWW'19, the best student paper award at the DLG-KDD'20 workshop, and the best paper award at SoCal-NLP'22.

Ziniu Hu, University of California, Los Angeles
Sage 5101 12:00 pm

Mar
1
2023
Towards Deep Semantic Understanding: Event-Centric Multimodal Knowledge Acquisition

Traditionally, multimodal information consumption has been entity-centric, focusing on concrete concepts (such as objects, object types, and physical relations, e.g., a person in a car), but it lacks the ability to understand abstract semantics (such as events and the semantic roles of objects, e.g., driver, passenger, mechanic). However, such event-centric semantics are the core knowledge being communicated, regardless of whether the information comes in the form of text, images, videos, or other data modalities.

At the core of my research in Multimodal Information Extraction (IE) is bringing such deep semantic understanding to the multimodal world. My work opens up a new research direction, Event-Centric Multimodal Knowledge Acquisition, to transform traditional entity-centric, single-modal knowledge into event-centric, multi-modal knowledge. Such a transformation poses two significant challenges: (1) understanding multimodal semantic structures that are abstract (such as events and the semantic roles of objects): I will present my solution of zero-shot cross-modal transfer (CLIP-Event), which is the first to model event semantic structures for vision-language pretraining and supports zero-shot multimodal event extraction for the first time; (2) understanding long-horizon temporal dynamics: I will introduce the Event Graph Model, which empowers machines to capture complex timelines, intertwined relations, and multiple alternative outcomes. I will also show its positive results on long-standing open problems, such as timeline generation, meeting summarization, and question answering. Such Event-Centric Multimodal Knowledge Acquisition starts the next generation of information access, which allows us to effectively access historical scenarios and reason about the future. I will lay out how I plan to grow a deep semantic understanding of the language world and the vision world, moving from concrete to abstract, from static to dynamic, and ultimately from perception to cognition.

Bio: Manling Li is a Ph.D. candidate in the Computer Science Department at the University of Illinois Urbana-Champaign. Her work on multimodal knowledge extraction won the ACL'20 Best Demo Paper Award, and her work on scientific information extraction from COVID literature won the NAACL'21 Best Demo Paper Award. She was a recipient of the Microsoft Research Ph.D. Fellowship in 2021, was selected as a DARPA Riser in 2022 and an EECS Rising Star in 2022, was awarded the C.L. Dave and Jane W.S. Liu Award, and has been selected as a Mavis Future Faculty Fellow. She led 19 students to develop the UIUC information extraction system, which ranked 1st in the DARPA AIDA evaluation in 2019 and 2020. She has more than 30 publications on multimodal knowledge extraction and reasoning and has given tutorials on event-centric multimodal knowledge at ACL'21, AAAI'21, NAACL'22, AAAI'23, etc. Additional information is available at https://limanling.github

Manling Li , University of Illinois Urbana-Champaign
SAGE 3510 11:00 am

Feb
27
2023
Decentralized intelligence

Today, connected devices and various smart sectors generate significant amounts of data. Tailoring machine learning algorithms to exploit this massive amount of data can lead to many new applications and enable ambient intelligence. The question is how to use this decentralized data to enhance system intelligence for the benefit of everyone while protecting sensitive information. Offloading such massive amounts of data from the edge devices to a cloud server for centralized processing is undesirable due to storage, latency, bandwidth, and power constraints, as well as users' privacy concerns. Furthermore, given the growing storage and computational capabilities of edge devices, it is increasingly attractive to store and process data locally by shifting network computations to the edge. This enables decentralized intelligence, where local computations convert decentralized data into global intelligence, enhancing data privacy while learning from the collection of data available across the network. In this talk, I highlight some of the challenges and advances in enabling decentralized intelligence by integrating computation, collaboration, and communication, the three essential components of collective intelligence.
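One widely used pattern for this kind of edge-side collaboration is federated averaging, in which only model parameters, never raw data, leave the devices. The sketch below is a minimal illustration on a least-squares model (the toy data and hyperparameters are assumptions), not a method from the talk.

# Minimal federated-averaging sketch (illustrative): each device runs a few
# gradient steps on its local data, and only model parameters are aggregated.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=5):
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w = w - lr * grad
    return w

def federated_round(w_global, devices):
    sizes = np.array([len(y) for _, y in devices], dtype=float)
    locals_ = [local_update(w_global.copy(), X, y) for X, y in devices]
    weights = sizes / sizes.sum()
    return sum(w_k * a_k for w_k, a_k in zip(locals_, weights))  # weighted average

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):                               # three edge devices with local data
    X = rng.normal(size=(20, 2))
    devices.append((X, X @ true_w + 0.1 * rng.normal(size=20)))

w = np.zeros(2)
for _ in range(30):                              # communication rounds
    w = federated_round(w, devices)
print(w)                                         # approaches true_w without sharing raw data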

Bio: Mohammad received the B.Sc. degree in Electrical Engineering from the Iran University of Science and Technology in 2011 and the M.Sc. degree in Electrical and Computer Engineering from the University of Tehran in 2014, both with the highest rank in his class. He obtained his Ph.D. in Electrical and Electronic Engineering from Imperial College London in 2019 and then spent two years as a Postdoctoral Research Associate in the Department of Electrical and Computer Engineering at Princeton University. He is currently a Postdoctoral Associate at MIT, which he joined in early 2022. He received the Best Ph.D. Thesis Award from the Department of Electrical and Electronic Engineering at Imperial College London, as well as from the IEEE Information Theory Chapter of the UK and Ireland, in 2019. He is also the recipient of the IEEE Communications Society Young Author Best Paper Award (2022) for the paper titled "Federated learning over wireless fading channels". His research interests include machine learning, information theory, distributed computing, privacy and security, and data science.

Mohammad Mohammadi Amiri, Massachusetts Institute of Technology
Sage 5101 12:00 pm

Feb
23
2023
Data Markets and Learning: Privacy Mechanisms and Personalization

The fuel of machine learning models and algorithms is the data usually collected from users, enabling refined search results, personalized product recommendations, informative ratings, and timely traffic data. However, increasing reliance on user data raises serious challenges. A common concern with many of these data-intensive applications centers on privacy — as a user’s data is harnessed, more and more information about her behavior and preferences is uncovered and potentially utilized by platforms and advertisers. These privacy costs necessitate adjusting the design of data markets to include privacy-preserving mechanisms. 

This talk establishes a framework for collecting data from privacy-sensitive strategic users to estimate a parameter of interest (by pooling users' data) in exchange for privacy guarantees and possible compensation for each user. We formulate this question as a Bayesian-optimal mechanism design problem, in which an individual can share her data in exchange for compensation but at the same time has a private, heterogeneous privacy cost, which we quantify using differential privacy. We consider two popular data market architectures: central and local. In both settings, we use Le Cam's method to establish minimax lower bounds for the estimation error and derive (near-)optimal estimators for given heterogeneous privacy loss levels of the users. Next, we pose the mechanism design problem as the optimal selection of an estimator and payments that elicit truthful reporting of users' privacy sensitivities. We further develop efficient algorithmic mechanisms to solve this problem in both privacy settings.

Finally, we consider the case that users are interested in learning different personalized parameters. In particular, we highlight the connections between this problem and the meta-learning framework, allowing us to train a model that can be adapted to each user's objective function.
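The mechanism-design results themselves do not reduce to a short snippet, but the central and local privacy architectures can be illustrated with the textbook Laplace mechanism for mean estimation (data assumed to lie in [0, 1]; a standard construction, not the Bayesian-optimal mechanism from the talk):

# Textbook Laplace-mechanism sketch contrasting the local and central models of
# differential privacy for estimating a mean of data in [0, 1] (illustration only).
import numpy as np

rng = np.random.default_rng(1)
data = rng.uniform(size=1000)          # each user's private value in [0, 1]
eps = 1.0                              # privacy loss level

# Local model: every user perturbs her own value before sharing it.
local_reports = data + rng.laplace(scale=1.0 / eps, size=data.size)
local_estimate = local_reports.mean()

# Central model: a trusted curator averages first, then adds (much less) noise,
# since the mean's sensitivity is only 1/n.
central_estimate = data.mean() + rng.laplace(scale=1.0 / (eps * data.size))

print(data.mean(), local_estimate, central_estimate)
# For the same eps, the central estimate is far more accurate, which is why the
# choice of market architecture matters for the estimation error bounds above.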

 

Bio: Alireza Fallah is a Ph.D. candidate in the Department of Electrical Engineering and Computer Science (EECS) and the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology (MIT). His research interests include machine learning theory, data markets and privacy, game theory, optimization, and statistics. He has received a number of awards and fellowships, including the Ernst A. Guillemin Best MIT EECS M.Sc. Thesis Award, the Apple Scholars in AI/ML Ph.D. Fellowship, the MathWorks Engineering Fellowship, and the Siebel Scholarship. He has also worked as a research intern on the Apple ML privacy team. Before joining MIT, he earned a dual B.Sc. degree in Electrical Engineering and Mathematics from Sharif University of Technology, Tehran, Iran.

 

Alireza Fallah, Massachusetts Institute of Technology
Sage 5101 12:00 pm

Feb
15
2023
Abstractions for Taming Irregularity at the Top

Addressing the performance gap between software and hardware is one of the major challenges in computer science and engineering. Software stacks and optimization approaches have long been designed to target regular programs — programs that operate over regular data structures such as arrays and matrices using loops — partly due to the abundance of regular programs in computer software. But irregular programs — programs that traverse irregular or pointer-based data structures such as sparse matrices, trees, and graphs using a mix of recursion and loops — also appear in many essential applications, including simulation, data mining, and graphics. Loop transformation frameworks are a good example of how performance-enhancing scheduling transformations are applied to regular programs; generally, these frameworks reason about transformations in a composable manner (i.e., about sequences of transformations).

 

In the past, scheduling transformations for irregular programs were ad hoc and were regarded by loop transformation frameworks as still on the horizon. Even the few existing ones were applied in isolation, and the composability of these transformations was not studied extensively. In this talk, I will discuss a composable framework for verifying the correctness of scheduling transformations for irregular programs. We will explore the abstractions used in different parts of our framework, and I will show ways to extend these abstractions to capture a wide variety of scheduling transformations for irregular programs. Finally, I will discuss future directions for incorporating dependence analyses and data layout abstractions into this framework.
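For readers less familiar with scheduling transformations, the canonical example for regular programs is loop tiling. The sketch below is a hand-written illustration (real frameworks apply and verify such transformations inside a compiler):

# Loop tiling, a classic scheduling transformation for regular programs:
# both loops compute the same matrix sum, but the tiled version visits the
# data in cache-friendly blocks. (Hand-written illustration only.)
import numpy as np

def untiled_sum(A):
    n, m = A.shape
    total = 0.0
    for i in range(n):
        for j in range(m):
            total += A[i, j]
    return total

def tiled_sum(A, tile=32):
    n, m = A.shape
    total = 0.0
    for ii in range(0, n, tile):                     # iterate over tiles
        for jj in range(0, m, tile):
            for i in range(ii, min(ii + tile, n)):   # iterate within a tile
                for j in range(jj, min(jj + tile, m)):
                    total += A[i, j]
    return total

A = np.arange(100 * 100, dtype=float).reshape(100, 100)
assert abs(untiled_sum(A) - tiled_sum(A)) < 1e-6     # same result, different schedule

Tiling itself composes two simpler transformations, strip-mining and loop interchange, which is the kind of composability that becomes much harder to reason about once the traversal is over trees or graphs rather than arrays.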

 

Bio: Kirshanthan (“Krish”) Sundararajah is a Ph.D. candidate in the Elmore Family School of Electrical and Computer Engineering at Purdue University, advised by Milind Kulkarni. He earned his Bachelor's degree from the University of Moratuwa, Sri Lanka, and his Master's degree from Purdue University. His research interests lie in the areas of compilers, programming languages, and high-performance computing; he is particularly interested in solving the performance challenges of irregular applications. He has published in top conferences such as ASPLOS, OOPSLA, and PLDI and is a recipient of the Bilsland Dissertation Fellowship.

Kirshanthan Sundararajah , Purdue University
SAGE 3510 11:00 am

Feb
13
2023
Bridging Humans and Machines: Interpretation Techniques for Trustworthy NLP

Neural network models have been pushing the limits of computers' capacity for natural language understanding and generation, but they lack interpretability. The black-box nature of deep neural networks hinders humans from understanding their predictions and trusting them in real-world applications. In this talk, I will introduce my effort to bridge the trust gap between models and humans by developing interpretation techniques that cover three main phases of a model's life cycle: training, testing, and debugging. I will demonstrate the critical value of integrating interpretability into every stage of model development: (1) making model prediction behavior transparent and interpretable during training; (2) explaining and understanding model decision-making on each test example; (3) diagnosing and debugging models (e.g., for robustness) based on interpretations. I will also discuss future directions on incorporating interpretation techniques into system development and human interaction for long-term trustworthy AI.
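As one concrete, model-agnostic example of a post-hoc interpretation technique (a generic illustration, not necessarily one of the methods in the talk), leave-one-out attribution scores each input token by how much the model's prediction changes when that token is removed:

# Leave-one-out (occlusion) attribution, a simple model-agnostic interpretation
# technique: a token is important if deleting it changes the model's score.
# The toy "model" is a stand-in for any text classifier returning a probability.
import math

def toy_sentiment_score(tokens):
    positive, negative = {"great", "good", "love"}, {"bad", "boring", "hate"}
    score = sum(t in positive for t in tokens) - sum(t in negative for t in tokens)
    return 1 / (1 + math.exp(-score))    # squash to (0, 1)

def leave_one_out(tokens, model):
    base = model(tokens)
    return {t: base - model(tokens[:i] + tokens[i + 1:])
            for i, t in enumerate(tokens)}

tokens = ["the", "plot", "was", "boring", "but", "the", "acting", "was", "great"]
print(leave_one_out(tokens, toy_sentiment_score))
# "boring" gets a negative attribution, "great" a positive one, filler words ~0.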


Bio: Hanjie Chen is a Ph.D. candidate in Computer Science at the University of Virginia. Her research interests lie in Trustworthy AI, Natural Language Processing (NLP), and Interpretable Machine Learning. She is a recipient of the Carlos and Esther Farrar Fellowship and the Best Poster Award at ACM CAPWIC 2021. Her work has been published at top-tier NLP/AI conferences (e.g., ACL, AAAI, EMNLP, NAACL) and was selected as a finalist for the National Center for Women & Information Technology (NCWIT) Collegiate Award in 2021. In addition, as the primary instructor, she co-designed and taught a cross-listed course, CS 4501/6501 Interpretable Machine Learning, at UVA. Her teaching was recognized by the UVA CS Outstanding Graduate Teaching Award and a nomination for the University-wide Graduate Teaching Awards (top 5% of graduate instructors).

Hanjie Chen, University of Virginia
Sage 5101 12:00 pm

Feb
8
2023
Statistical inference with privacy and computational constraints

The vast amount of digital data we create and collect has revolutionized many scientific fields and industrial sectors. Yet, despite our success in harnessing this transformative power of data, computational and societal trends emerging from current data science practice necessitate upgrading our toolkit for data analysis. In this talk, we discuss how practical considerations such as privacy and memory limits affect statistical inference tasks. In particular, we focus on two examples. First, we consider hypothesis testing with privacy constraints: how one can design an algorithm that tests whether two data features are independent or correlated using a nearly optimal number of data points, while preserving the privacy of the individuals in the data set. Second, we study the problem of estimating the entropy of a distribution by streaming over i.i.d. samples from it, and we determine how bounded memory affects the number of samples needed to solve this problem.
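For the entropy example, the classical unconstrained baseline is the plug-in estimator with the Miller-Madow bias correction; a quick sketch assuming i.i.d. samples from a discrete distribution (the talk's question is how bounded memory changes the sample cost, which this baseline ignores):

# Plug-in (maximum-likelihood) entropy estimator with the Miller-Madow bias
# correction: the classical, memory-unconstrained baseline for the problem above.
import numpy as np
from collections import Counter

def entropy_estimate(samples):
    n = len(samples)
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / n
    plug_in = -np.sum(p * np.log(p))              # empirical entropy (nats)
    return plug_in + (len(counts) - 1) / (2 * n)  # Miller-Madow correction

rng = np.random.default_rng(2)
samples = rng.integers(0, 8, size=5000)           # uniform over 8 symbols
print(entropy_estimate(samples), np.log(8))       # estimate vs. true entropy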

 

Bio: Maryam Aliakbarpour is a postdoctoral researcher at Boston University and Northeastern University, where she is hosted by Prof. Adam Smith and Prof. Jonathan Ullman. Before that, she was a postdoctoral research associate at the University of Massachusetts Amherst, hosted by Prof. Andrew McGregor (from Fall 2020 to Summer 2021). In Fall 2020, she was a visiting participant in the Probability, Geometry, and Computation in High Dimensions Program at the Simons Institute at Berkeley. Maryam received her Ph.D. in September 2020 from MIT, where she was advised by Prof. Ronitt Rubinfeld. She was selected for Rising Stars in EECS in 2018 and won the Neekeyfar Award from the MIT Office of Graduate Education.

Maryam Aliakbarpour , Boston University and Northeastern University
Sage 3704 11:00 am

Feb
6
2023
Reliable Machine Learning via Integrating Context

Learning-based software and systems are deeply embedded in our lives. However, despite the excellent performance of machine learning models on benchmarks, state-of-the-art methods like neural networks often fail once they encounter realistic settings. Because neural networks often learn correlations without reasoning over the right signals and knowledge, they fail when facing shifting distributions, unforeseen corruptions, and worst-case scenarios. In this talk, I will show how to build reliable and robust machine learning by tightly integrating context into the models. The context has two aspects: the intrinsic structure of natural data and the extrinsic structure of domain knowledge. Both are crucial: by capitalizing on the intrinsic structure of natural images, I show that we can create computer vision systems that are robust even in the worst case, an analytical result that also enjoys strong empirical gains. Through the integration of external knowledge, such as causal structure, my framework can instruct models to use the right signals for visual recognition, enabling new opportunities for controllable and interpretable models. I will also talk about future work on making machine learning robust, which I hope will help transform us into an intelligent society.

 

Bio: Chengzhi Mao is a final-year Ph.D. student in the Department of Computer Science at Columbia University, advised by Prof. Junfeng Yang and Prof. Carl Vondrick. He received his B.S. in Electrical Engineering from Tsinghua University. His research focuses on reliable and robust machine learning. His work has led to over ten publications and oral presentations at top conferences and has established a new generalization of robust models beyond feedforward inference. His work also connects causality to the vision domain. He serves as a reviewer for several top conferences, including CVPR, ICCV, ECCV, ICLR, NeurIPS, IJCAI, and AAAI.

Chengzhi Mao , Columbia University
Sage 5101 12:00 pm