During their thesis defense, PhD candidates introduce and motivate the problems they tackled during their course of study, defend the novelty and significance of their research, and contextualize their contributions within their field. The defense is the final step in obtaining a PhD, and a successful defense indicates the doctoral committee's acknowledgment that the candidate is an expert in their field. Defense talks are open to all members of the RPI community, and we welcome those interested to attend.

2024

Mar
22
2024
Computer Science MS Poster Session

Student: Matthew Uryga
Advisor: Prof. Oshani Seneviratne
Poster Title: DeFi Data Analysis

Student: Matthew Cirimele
Advisor: Prof. Konstantin Kuzmin
Poster Title: One-Word Natural Language Classification

Student: Daniel Savidge
Advisor: Prof. George Slota
Poster Title: Distributed Graph Processing on GPU

Student: Daniel Chen
Advisor: Prof. Bulent Yener
Poster Title: Graph Mining for Copper Mining Applications

Student: Mason Sklar
Advisor: Prof. Rado Ivanov
Poster Title: Exploring Image-to-Text Robustness of Foundation Models

Student: Alexander Montes
Advisor: Prof. Lirong Xia
Poster Title: OPRA/Group Matching Platform

Student: Michael Roberts
Advisor: Prof. Sergei Nirenburg
Poster Title: Language Endowed Intelligent Agents in Simulated 3D Environments

Student: Zhi Zheng
Advisor: Prof. Ron Sun
Poster Title: Leveraging Sentiment Analysis through Motivation, Personality and Dialogue Agents

Student: Jesse Huang
Advisor: Prof. Ana Milanova
Poster Title: Static Inconsistency Detection of Python Class Inheritance

Student: Arthi Seetharaman
Advisor: Prof. Uzma Mushtaque
Poster Title: Using GANs to Alleviate Data Sparsity in Recommender Systems

Student: Dimitri Lopez
Advisor: Prof. Jianxi Gao
Poster Title: Adaptability reveals the healthcare system's resilience to pandemics

MS Graduate Students
Lally 102, 4:30 pm

Mar
7
2024
Bergeron: Combating Adversarial Attacks by Emulating a Conscience

Artificial intelligence alignment is the practice of encouraging an AI to behave in a manner that is compatible with human values and expectations. Research into this area has grown considerably since the introduction of increasingly capable Large Language Models (LLMs). The most effective contemporary methods of alignment are primarily weight-based: modifying the internal weights of a model to better align its behavior with human preferences. An optimal alignment process results in an AI model that is maximally helpful to its user while generating minimally harmful responses. Unfortunately, modern methods of alignment still fail to fully prevent harmful responses when faced with effective adversarial attacks. These deliberate attacks can trick seemingly aligned models into giving manufacturing instructions for dangerous materials, inciting violence, or recommending other immoral acts. To help mitigate this issue, I introduce Bergeron: a framework designed to improve the robustness of LLMs against attacks without any additional parameter fine-tuning. Bergeron is organized into two tiers, with a secondary LLM emulating the conscience of a protected primary LLM. This framework safeguards the primary model against incoming attacks while monitoring its output for any harmful content. Empirical analysis shows that, by using Bergeron to complement models with existing alignment training, we can improve the robustness and safety of multiple commonly used commercial and open-source LLMs. Additionally, I demonstrate that a carefully chosen secondary model can effectively protect even much larger primary LLMs with a relatively minimal impact on Bergeron's resource usage.

Matthew Pisano, Advisor: Mei Si
Carnegie 113 or https://rensselaer.webex.com/meet/pisanm2 3:00 pm
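The two-tier structure described in the abstract can be sketched in a few lines. This is an illustrative skeleton only, not Bergeron's actual implementation: the function names are hypothetical, and the toy keyword check stands in for real LLM calls that would judge prompts and responses.

```python
# Hypothetical sketch of a two-tier "conscience" framework in the spirit of
# Bergeron. In a real system, primary_model and conscience_flags would each
# call an actual LLM; here they are toy stand-ins for illustration.

def primary_model(prompt: str) -> str:
    """Stand-in for the protected primary LLM."""
    return f"Response to: {prompt}"

def conscience_flags(text: str) -> bool:
    """Stand-in for the secondary LLM judging whether text seems harmful."""
    return "dangerous" in text.lower()

def guarded_generate(prompt: str) -> str:
    # Tier 1: the secondary model screens the incoming prompt.
    if conscience_flags(prompt):
        return "Request declined: the prompt appears to seek harmful content."
    # The primary model answers as usual.
    response = primary_model(prompt)
    # Tier 2: the secondary model screens the outgoing response.
    if conscience_flags(response):
        return "Response withheld: the generated content appeared harmful."
    return response

print(guarded_generate("How do clouds form?"))
print(guarded_generate("Share dangerous manufacturing instructions"))
```

The key design point the abstract highlights is that neither tier modifies the primary model's weights; the secondary model wraps the primary one at inference time.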

Feb
29
2024
Incorporating Context into Knowledge Graph Completion Methods

Knowledge Graph Completion (KGC) methods serve as a valuable tool to identify missing information in a knowledge graph (KG), such as predicting a missing relation between two entities or inferring properties about an entity that does not currently exist in the KG; the results of such KGC methods can be used to enable knowledge-driven downstream tasks. To further enhance the capabilities of KGC methods and to help understand their predictions, context can play an important role. However, our understanding and use of "context" as it relates to KGC methods has been limited in existing works, often relying on vague or ad-hoc definitions in "context-aware" KGC methods. In this thesis, we explore how to incorporate context into KGC methods from the perspectives of three use case domains (cooking recipes, event forecasting, and tabular data management) and KGC subtasks through the development of novel KGC methods. Additionally, we investigate how we can capture "context" as it relates to KGC methods in a more explicit manner through the development of an ontology model. Through this thesis' contributions, we demonstrate how context can be incorporated through a variety of different methods and tasks to achieve greater performance in difficult experimental settings, as well as how such context can be represented in our model.

Sola Shirai, Advisor: Deborah McGuinness
Winslow 1140 or https://rensselaer.webex.com/rensselaer/j.php?MTID=m63af6416c111e9d44e2ec20f9e9d1888 1:00 pm
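The link-prediction task the abstract opens with (predicting a missing relation or entity in a KG) can be illustrated with a minimal embedding-based sketch. This is a generic TransE-style example with hand-picked toy vectors, not the context-aware methods developed in the thesis; all names and numbers here are made up for illustration.

```python
# Toy knowledge graph completion: score candidate tails for a query
# (head, relation, ?) using a TransE-style rule, score(h, r, t) = -||h + r - t||.
# Embeddings are hand-crafted 2-D toy vectors, not learned ones.
import math

embeddings = {
    "Tokyo":      [0.9, 0.1],
    "Japan":      [1.0, 1.0],
    "Paris":      [0.2, 0.1],
    "France":     [0.3, 1.0],
    "capital_of": [0.1, 0.9],  # relation vector: roughly maps city -> country
}

def score(head: str, relation: str, tail: str) -> float:
    """Higher is better: negative distance between (head + relation) and tail."""
    h, r, t = embeddings[head], embeddings[relation], embeddings[tail]
    return -math.dist([hi + ri for hi, ri in zip(h, r)], t)

# Complete the triple (Paris, capital_of, ?) by ranking candidate tails.
candidates = ["Japan", "France"]
best = max(candidates, key=lambda c: score("Paris", "capital_of", c))
print(best)  # -> France
```

Context-aware KGC methods, as studied in the thesis, go beyond such triple-only scoring by conditioning predictions on additional signals such as the surrounding domain (recipes, events, tables).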