Computer Science Colloquia & Seminars are held each semester and sponsored by the Computer Science department. Faculty invite speakers from all areas of computer science, and the talks are open to all members of the RPI community.
Dynamic graph mining can elucidate the activity of in-network processes in diverse application domains
from social, mobile and communication networks to infrastructure and biological networks. Compared to
static graphs, the temporal information of when graph events occur is an important new dimension for improving
the quality, interpretation and utility of mined patterns. However, mining dynamic graphs poses an important,
though often overlooked, challenge: observed data must be analyzed at an appropriate temporal resolution
(timescale), commensurate with the underlying rate of application-specific processes. If the temporal resolution
is too high, evidence for ongoing processes may be fragmented in time; if it is too low, data relevant to multiple ongoing
processes may be mixed, thus obstructing discovery. Existing approaches for dynamic graph mining typically adopt
a fixed timescale (e.g., minutes, days, years), and they mine for patterns in the corresponding aggregated graph
snapshots. However, timescale-aware methods must consider non-uniform resolution across both time and the
graph, and thus, account for heterogeneous network processes evolving at varying rates in different graph regions.
In this talk I will discuss our recent work on detecting an appropriate timescale for community activity and
information propagation in social networks.
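To make the timescale issue concrete, here is a small illustrative sketch (not from the talk) of how the choice of temporal resolution changes the snapshots a dynamic-graph miner sees; the edge stream, window sizes, and function name are invented for illustration:

```python
from collections import defaultdict

def aggregate_snapshots(edge_stream, window):
    """Group a stream of (timestamp, u, v) edge events into
    graph snapshots of a fixed temporal resolution `window`."""
    snapshots = defaultdict(set)
    for t, u, v in edge_stream:
        snapshots[t // window].add((u, v))
    return dict(snapshots)

# A toy stream: a burst of community activity around t=0-3
# and another around t=100-103.
stream = [(0, 'a', 'b'), (1, 'b', 'c'), (3, 'a', 'c'),
          (100, 'x', 'y'), (101, 'y', 'z'), (103, 'x', 'z')]

fine   = aggregate_snapshots(stream, window=1)    # too high: each burst fragments
coarse = aggregate_snapshots(stream, window=200)  # too low: the two bursts mix
medium = aggregate_snapshots(stream, window=10)   # one snapshot per burst

print(len(fine), len(medium), len(coarse))  # 6 2 1
```

At the finest resolution each burst is shattered across snapshots; at the coarsest, the two unrelated bursts are merged into one aggregated graph, obstructing discovery, exactly the tradeoff the abstract describes.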
Bio: Petko Bogdanov is an Assistant Professor in the Department of Computer Science at the University at Albany, SUNY. His research interests include data mining and management, with a focus on graph data and applications to bioinformatics, neuroscience, materials informatics and sociology. Previously, he was a postdoctoral fellow in the Department of Computer Science at the University of California, Santa Barbara. He received his PhD and MS in Computer Science from the University of California, Santa Barbara in 2012 and his BE in Computer Engineering from the Technical University of Sofia in 2005. Dr. Bogdanov is a member of the IEEE and the ACM, and his research has been sponsored by grants from NSF, DARPA and ONR.
Abstract: Many real-life situations involve dividing a set of resources among agents in an impartial or equitable manner. Fair division is the formal study of such situations, accounting for the fact that different agents might have different preferences over the goods that need to be divided. In this talk, we will discuss the fair division of discrete (or indivisible) goods, and present a recent result showing that the seemingly incompatible notions of fairness and economic efficiency can be achieved simultaneously.
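The abstract does not name the result, but a well-known theorem in this area (due to Caragiannis et al.) states that any allocation of indivisible goods maximizing Nash welfare, the product of agents' utilities, is both envy-free up to one good (EF1, a fairness notion) and Pareto optimal (an efficiency notion). Below is a brute-force sketch for toy instances; the valuation numbers are invented:

```python
from itertools import product

def nash_welfare_allocation(values):
    """Brute-force the allocation of indivisible goods maximizing
    Nash welfare (the product of agent utilities). values[i][g] is
    agent i's value for good g. Exponential: toy instances only."""
    n, m = len(values), len(values[0])
    best, best_alloc = -1, None
    for assignment in product(range(n), repeat=m):  # good g -> agent
        utils = [sum(values[i][g] for g in range(m) if assignment[g] == i)
                 for i in range(n)]
        welfare = 1
        for u in utils:
            welfare *= u
        if welfare > best:
            best, best_alloc = welfare, assignment
    return best_alloc

def is_ef1(values, assignment):
    """Check envy-freeness up to one good: agent i stops envying j
    after removing some single good from j's bundle."""
    n, m = len(values), len(values[0])
    bundles = [[g for g in range(m) if assignment[g] == j] for j in range(n)]
    for i in range(n):
        mine = sum(values[i][g] for g in bundles[i])
        for j in range(n):
            if i == j or not bundles[j]:
                continue
            theirs = sum(values[i][g] for g in bundles[j])
            if mine < theirs - max(values[i][g] for g in bundles[j]):
                return False
    return True

vals = [[6, 1, 3, 2], [4, 5, 1, 4]]   # 2 agents, 4 goods
alloc = nash_welfare_allocation(vals)
print(alloc, is_ef1(vals, alloc))     # the MNW allocation is EF1
```

Here fairness (EF1) and efficiency (Pareto optimality) are achieved simultaneously by one allocation, which is the flavor of the result the abstract alludes to.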
Bio: Rohit Vaish is a postdoctoral research associate in Prof. Lirong Xia's group at the Department of Computer Science at RPI. Prior to that, he did his PhD from Indian Institute of Science. His research lies at the intersection of computer science and economics, including topics such as voting, stable matching, fair division, and kidney exchange.
The explosion of the information age has thrust us into a new, interconnected, and uncharted future. We send data in thinly-wrapped packets through digital seas, forge connections with unknown cyber-entities, and generally, we believe that we are safe in cyberspace. But as with any uncharted domain, there be dragons...
Hackers are the dragons of the digital age. But there can be good dragons, and there can be evil dragons. Maintaining the balance, through the recruitment and training of "good" hackers, is a hard challenge facing the cybersecurity community. This talk will briefly explore the current paths for laypersons to become hackers, discuss developments and resources in that area, and then move on to a look at the future: replacing us all with very clever AI algorithms, and the new, uncharted frontier of fully-autonomous hacking systems. But as with any uncharted frontier, there be dragons...
Bio: Yan Shoshitaishvili (RPI class of 2006) is an assistant professor at Arizona State University, where he leads research into automated program analysis and vulnerability identification techniques. As part of this, Yan led Shellphish's participation in the DARPA Cyber Grand Challenge, applying his research to the creation of a fully autonomous hacking system that will one day destroy us all. Underpinning this system is angr, an open-source binary analysis project founded by Yan (and others!) and carefully pushed toward sentience over the years. When he is not doing academic research, Yan is active in the competitive hacking community. In the past, he was the captain of Shellphish, one of the oldest competitive hacking groups in the world. Recently, he founded the Order of the Overflow, whose iron fist controls DEF CON CTF, one of the "world championship" events in cybersecurity competitions. As part of this, Yan regularly deals with the question of how to get interested people into cybersecurity, and turn them into good hackers.
Predicting the outcome of future events is a fundamental problem in fields as varied as computer science, finance, political science and others. To this end, the Wisdom of Crowds principle says that an aggregate of many crowdsourced predictions can significantly outperform even expert individuals or models. In this talk, I will focus on the problem of accurately eliciting these predictions using wagering mechanisms, where participants provide both a probabilistic prediction of the event in question, and a monetary wager that they are prepared to stake.
I will discuss some recent progress on the design and analysis of wagering mechanisms. In particular, I will focus on two surprising applications of wagering mechanisms. First, they can be used to select a winner in a winner-takes-all forecasting competition, which turns out to be quite different from the standard setting where we can reward everyone. Second, I'll show that any wagering mechanism defines a corresponding allocation mechanism for dividing scarce resources among competing agents, a seemingly unrelated problem. This correspondence immediately leads to advances in both areas.
Bio: Rupert Freeman is a postdoc at Microsoft Research New York City. Previously, he received his Ph.D. from Duke University under the supervision of Vincent Conitzer. His research focuses on the intersection of artificial intelligence and economics, particularly in topics such as resource allocation, voting, and information elicitation. He is the recipient of a Facebook Ph.D. Fellowship and a Duke Computer Science outstanding dissertation award.
Refreshments will be served at 3:30pm.
In standard auction theory, bidders' values are private, payments are linear, and the auctioneer seeks to maximize some objective, such as revenue (i.e., total payments). Myerson derived an optimal auction for precisely this setting. Perhaps surprisingly, Myerson's payments also arise as the unique Bayes-Nash equilibrium of an all-pay auction with private values and linear payments. Further, it has been observed that this all-pay auction model naturally describes a contest, where all contestants pay with their time and effort, even if they walk away empty-handed. Building on this analogy, we study an optimal contest design problem where contestants' abilities are private, their costs are a convex function of their effort (think: 80/20 rule), and the designer seeks to maximize submission quality. We show that a very simple all-pay contest where the prize is distributed equally among the top quartile of contestants is always a constant factor approximation to the optimal for a large class of convex cost functions. This result matches the intuition of teachers who do not award only a single A to the very best student in the class; doing so would not maximize the overall quality of student work, because many students would be discouraged from even trying.
Brief Bio: Dr. Amy Greenwald is Professor of Computer Science at Brown University in Providence, Rhode Island. She studies game-theoretic and economic interactions among computational agents, applied to areas like autonomous bidding in wireless spectrum auctions and ad exchanges. In 2011, she was named a Fulbright Scholar to the Netherlands (though she declined the award). She was awarded a Sloan Fellowship in 2006; she was nominated for the 2002 Presidential Early Career Award for Scientists and Engineers (PECASE); and she was named one of the Computing Research Association's Digital Government Fellows in 2001. Before joining the faculty at Brown, Dr. Greenwald was employed by IBM's T.J. Watson Research Center. Her paper entitled "Shopbots and Pricebots" (joint work with Jeff Kephart) was named Best Paper at IBM Research in 2000.
Refreshments at 8:45 am
Demand for resources that are collectively controlled or regulated by society, like social services or organs for transplantation, typically far outstrips supply. How should these scarce resources be allocated? Any approach to this question requires insights from computer science, economics, and beyond; we must define objectives, predict outcomes, and optimize allocations, while carefully considering agent preferences and incentives. In this talk, I will discuss our work on weighted matching and assignment in two domains, namely living donor kidney transplantation and provision of services to homeless households. My focus will be on how effective prediction of the outcomes of matches has the potential to dramatically improve social welfare both by allowing for richer mechanisms and by improving allocations. I will also discuss implications for equity and justice.
This talk is based on joint work with Zhuoshu Li, Amanda Kube, Sofia Carrillo, Patrick Fowler, and Jason Wellen.
Sanmay Das is an associate professor in the Computer Science and Engineering Department at Washington University in St. Louis. Prior to joining Washington University, he was on the faculty at Virginia Tech and at Rensselaer Polytechnic Institute. Das received Ph.D. and S.M. degrees from the Massachusetts Institute of Technology, and an A.B. from Harvard University, all in Computer Science. Das' research is in artificial intelligence and machine learning, especially their intersection with finance, economics, and the social sciences. He has received an NSF CAREER Award and the Department Chair Award for Outstanding Teaching at Washington University. Das has served as program co-chair of AAMAS and AMMA, and regularly serves on the senior program committees of major AI conferences like AAAI and IJCAI.
Although each individual thinks that the decision to form a link in a social network is an autonomous, local decision, we know that this isn’t so. Larger forces are involved: human mental limitations (Dunbar’s Number), social balance that induces triangle closure and resists certain signed triads (Georg Simmel), and the flow of properties ‘over the horizon’ within networks (Christakis). Social network analysis has three phases: aggregating individual link decisions, understanding the global structure that results, and revisiting each link in the context of this global structure; and both the second and third phases provide insights.
The relationships associated with links are much richer than most social network analysis recognises: relationships can be of qualitatively different types (friend, colleague), directed (so the intensity one way is different from the intensity the other), signed (both positive and negative, friend or enemy), and dynamic (changing with time). All of these extra properties, and their combinations, can be modelled using a single scheme, creating an expanded social network, which can then be analysed in conventional ways (for example, by spectral embedding).
Social network analysis is especially useful in adversarial settings (where the interests of those doing the modelling and [some of] those being modelled are not aligned) because each individual cannot control much of the global structure. I will illustrate how this pays off in law enforcement and counterterrorism settings.
The internet and modern technology enable us to communicate and interact at lightning speed and across vast distances. The communities and markets created by this technology must make collective decisions, allocate scarce resources, and understand each other quickly, efficiently, and often in the presence of noisy communication channels, ever-changing environments, and/or adversarial data. Many theoretical results in these areas are grounded on worst-case assumptions about agent behavior or the availability of resources. Transitioning theoretical results into practice requires data-driven analysis and experiment, as well as novel theory with assumptions based on real-world data. I'll discuss recent work focused on creating novel algorithms, including a novel, strategyproof mechanism for selecting a small subset of winners amongst a group of peers, and algorithms for resource allocation with applications ranging from reviewer matching to deceased-organ allocation. These projects require novel algorithms and leverage data to perform detailed experiments, as well as creating open source tools.
Nicholas Mattei is a Research Staff Member in the Cognitive Computing Group at the IBM T.J. Watson Research Center. His research is in artificial intelligence (AI) and its applications, largely motivated by problems that require a blend of techniques to develop systems and algorithms that support decision making for autonomous agents and/or humans. Most of his projects leverage theory, data, and experiment to create novel algorithms, mechanisms, and systems that enable and support individual and group decision making. He is the founder and maintainer of PrefLib: A Library for Preferences and the associated PrefLib:Tools available on GitHub, and is the founder/co-chair of the workshop on Exploring Beyond the Worst Case in Computational Social Choice (2014-2017) held at AAMAS.
Nicholas was formerly a senior researcher working with Prof. Toby Walsh in the AI & Algorithmic Decision Theory Group at Data61 (formerly known as the Optimisation Group at NICTA). He has also been an adjunct lecturer in the School of Computer Science and Engineering (CSE) and a member of the Algorithms Group at the University of New South Wales. He previously worked as a programmer and embedded electronics designer for nano-satellites at NASA Ames Research Center. He received his Ph.D. from the University of Kentucky under the supervision of Prof. Judy Goldsmith in 2012.
Refreshments served at 3:30 p.m.
Reception at 3:30pm
Emerging real-world graph problems include: detecting community structure in large social networks; improving the resilience of the electric power grid; and detecting and preventing disease in human populations. Unlike traditional applications in computational science and engineering, solving these problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for additional research on scalable algorithms and development of frameworks for solving these problems on high performance computers, and the need for improved models that also capture the noise and bias inherent in the torrential data streams. In this talk, the speaker will discuss the opportunities and challenges in massive data-intensive computing for applications in computational science and engineering.
David A. Bader is Professor and Chair of the School of Computational Science and Engineering, College of Computing, at Georgia Institute of Technology. He is a Fellow of the IEEE and AAAS and served on the White House's National Strategic Computing Initiative (NSCI) panel. Dr. Bader served as a board member of the Computing Research Association, on the NSF Advisory Committee on Cyberinfrastructure, on the Council on Competitiveness High Performance Computing Advisory Committee, on the IEEE Computer Society Board of Governors, on the Steering Committees of the IPDPS and HiPC conferences, and as editor-in-chief of IEEE Transactions on Parallel and Distributed Systems, and is a National Science Foundation CAREER Award recipient. Dr. Bader is a leading expert in data sciences. His interests are at the intersection of high-performance computing and real-world applications, including cybersecurity, massive-scale analytics, and computational genomics, and he has co-authored over 210 articles in peer-reviewed journals and conferences. During his career, Dr. Bader has served as PI/co-PI of over $180M in competitive awards. Dr. Bader has served as a lead scientist in several DARPA programs, including High Productivity Computing Systems (HPCS) with IBM PERCS, Ubiquitous High Performance Computing (UHPC) with NVIDIA ECHELON, Anomaly Detection at Multiple Scales (ADAMS), Power Efficiency Revolution For Embedded Computing Technologies (PERFECT), and Hierarchical Identify Verify Exploit (HIVE). He has also served as Director of the Sony-Toshiba-IBM Center of Competence for the Cell Broadband Engine Processor. Bader is a co-founder of the Graph500 List for benchmarking "Big Data" computing platforms. Bader is recognized as a "RockStar" of High Performance Computing by InsideHPC and as one of HPCwire's People to Watch in 2012 and 2014. Dr. Bader also serves as an associate editor for several high-impact publications including IEEE Transactions on Computers, ACM Transactions on Parallel Computing, and ACM Journal of Experimental Algorithmics. He successfully launched his school's Strategic Partnership Program in 2015, whose partners include Accenture, Booz Allen Hamilton, Cray, IBM, Keysight Technologies, LexisNexis, Northrop Grumman, NVIDIA, and Yahoo; as well as the National Security Agency, Sandia National Laboratories, Pacific Northwest National Laboratory, and Oak Ridge National Laboratory.
In the more than 200 years since Ada Lovelace's birth, she has been celebrated, neglected, and taken up as a symbol for any number of causes and ideas. This talk traces some of the paths that the idea of Lovelace, and her imagination of Charles Babbage's Analytical Engine, has taken. In particular, we focus on music and creativity, following Lovelace's idea that 'the engine might compose elaborate and scientific pieces of music of any degree of complexity or extent'.
This work began at a symposium in 2015 to mark the 200th anniversary of Lovelace's birth. Through collaborations with Pip Willcox (head of the Centre for Digital Scholarship at the Bodleian Libraries), composer Emily Howard (Royal Northern College of Music), and mathematician Ursula Martin, we have conducted a series of experiments and demonstrations inspired by the work of Lovelace and Babbage. These include simulations of the Analytical Engine, use of a web-based music application, construction of hardware, reproduction of earlier mathematical results using contemporary computational methods, and a musical performance based on crowd sourced algorithmic fragments.
Recently we have introduced Charles Wheatstone to our experiments: Wheatstone deployed an electric telegraph in the same period, but could an electromechanical computer have been constructed? These digital experiments bring insight and engagement with historical scenarios. Our designed digital artefacts can be viewed as design fictions, or as critical works explicating our interpretation of Lovelace’s words: digital prototyping as a means of close-reading. We frame this as Experimental Humanities.
David De Roure is a Professor of Computer Science in the School of Electronics and Computer Science at the University of Southampton, UK. A founding member of the School's Intelligence, Agents, Multimedia Group, he leads the e-Research activities and is Director of the Pervasive Systems Centre. David's work focuses on creating new research methods in and between multiple disciplines, particularly through the codesign of new tools. His projects draw on Web 2.0, Semantic Web, workflow and scripting technologies; he pioneered the Semantic Grid initiative and is an advocate of Science 2.0. He is closely involved in UK e-Science and e-Social Science programmes in projects including myExperiment, CombeChem, myGrid, LifeGuide, e-Research South and the Open Middleware Infrastructure Institute. David has worked for many years with distributed information systems and distributed programming languages, with leading roles in the Web, hypertext and Grid communities. He is a Scientific Advisory Council member of the Web Science Research Initiative and a Fellow of the British Computer Society.
Control network design consists mainly of two steps: input/output (I/O) selection and control configuration (CC) selection. The first is devoted to the problem of computing how many actuators/sensors are needed, and where they should be placed in the plant, to obtain some desired property. Control configuration is related to the decentralized control problem and is dedicated to the task of selecting which outputs (sensors) should be available for feedback, and to which inputs (actuators), in order to achieve a predefined goal. The choice of inputs and outputs affects the performance, complexity and cost of the control system. Due to the combinatorial nature of the selection problem, an efficient and systematic method is required to complement the designer's intuition, experience and physical insight.
Motivated by the above, this presentation addresses structural control system design, taking explicitly into consideration possible application to large-scale systems. We provide an efficient framework to solve the following major minimization problems: i) selection of the minimum number of manipulated/measured variables to achieve structural controllability/observability of the system, and ii) selection of the minimum number of measured and manipulated variables, and feedback interconnections between them, such that the system has no structural fixed modes. Contrary to what might be expected, we show that it is possible to obtain the global solution of the aforementioned minimization problems with polynomial complexity in the number of state variables of the system. To this effect, we propose a methodology that is efficient (polynomial complexity) and unified, in the sense that it solves the I/O and CC selection problems simultaneously. This is done by exploiting the implications of the I/O selection in the solution to the CC problem.
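The talk's exact algorithms are not reproduced here, but a classical polynomial-time building block for this kind of I/O selection problem is the matching-based characterization of structural controllability (Liu, Slotine and Barabási): the minimum number of driver (input) nodes equals N minus the size of a maximum matching in the network's bipartite representation. A minimal sketch, with an invented example graph:

```python
def max_matching(adj, n_left, n_right):
    """Augmenting-path maximum bipartite matching.
    adj[u] lists the right-nodes reachable from left-node u."""
    match_r = [-1] * n_right
    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_r[v] == -1 or try_augment(match_r[v], seen):
                match_r[v] = u
                return True
        return False
    return sum(try_augment(u, set()) for u in range(n_left))

def min_driver_nodes(edges, n):
    """Minimum number of independently actuated (driver) nodes for
    structural controllability of a directed network: n minus the
    size of a maximum matching in its bipartite representation."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)          # left copy of u -> right copy of v
    return max(1, n - max_matching(adj, n, n))

# A directed star 0 -> 1, 0 -> 2, 0 -> 3: one input cannot steer
# all three leaves independently, so three drivers are needed.
print(min_driver_nodes([(0, 1), (0, 2), (0, 3)], 4))  # 3
```

By contrast, a directed path 0 -> 1 -> 2 -> 3 admits a perfect matching and needs only one driver node, illustrating how matching structure, not just size, determines the answer.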
This talk is based on the newly-released biography of Claude Shannon—one of the foremost intellects of the twentieth century and the architect of the Information Age, whose insights stand behind every computer built, email sent, video streamed, and webpage loaded. Claude Shannon was a groundbreaking polymath, a brilliant tinkerer, and a digital pioneer. He constructed the first wearable computer, outfoxed Vegas casinos, and built juggling robots. He also wrote the seminal text of the digital revolution, which has been called “the Magna Carta of the Information Age.”
Jimmy Soni has served as an editor at The New York Observer and the Washington Examiner and as managing editor of Huffington Post. He is a former speechwriter, and his written work and commentary have appeared in Slate, The Atlantic, and CNN, among other outlets. He is a graduate of Duke University. With Rob Goodman, he is the award-winning coauthor of Rome’s Last Citizen: The Life and Legacy of Cato, Mortal Enemy of Caesar, and A Mind at Play: How Claude Shannon Invented the Information Age. A Mind at Play was awarded the 2017 Neumann Prize from the British Society for the History of Mathematics for the book that best explains mathematical history for a general audience.
The phenomenon of spreading is very broad: it can mean the spreading of electromagnetic waves, wind-blown seeds, diseases or information. Spreading can also happen in various environments, from simple spatial ones (like waves on water) to complicated networks (like information in society). Quite often it is evident that there has been a spread, but it is not directly known where it came from. In the case of physical phenomena that spread through space at constant velocity, simple triangulation may suffice to pinpoint the source. But if the process happens in a complex network, and its nature is so complex that it can only be described stochastically (as with epidemics or information spreading), then finding the source becomes a more complicated problem.
Both epidemics and information spreading share a common property: there is no total mass, energy, count, or volume conserved, as there is, for example, in spreading waves (energy) or diffusing substances (mass/volume). Because of that, both can be modeled in a similar way, for example by a basic SI (Susceptible-Infected) model. The presentation will describe some existing methods for finding the sources of spread in such processes, focus on a method based on maximum likelihood introduced by Pinto et al., and describe derivative methods that offer much better performance as well as improved accuracy.
Assume we have a network where information or an epidemic has spread, and we only know when it arrived at some specific points in the network (which we call observers). Further assumptions are that spreading always happens along shortest paths from the source to any node, and that the delays on links can be approximated by a normal distribution. It is then possible, for each potential source in the network (every node, in general), to calculate the expected arrival times as well as the variances and covariances of these times. For each node we therefore have a multivariate normal distribution of arrival times at all observers. Comparing these distributions with the actual observations, we can find the source whose distribution best fits the observed arrival times.
One drawback of this method is that for a large number N of nodes in the network, the computational cost becomes prohibitive, with complexity up to O(N^4). We have proposed a derivative method that limits the set of considered observers and makes a smart choice of suspected nodes. We take into account only the observers closest to the real source (those with the earliest observed infection times), greatly decreasing computational complexity. Instead of calculating the distribution for every node, we start with the closest observer and follow a gradient of increasing likelihood. These changes not only greatly improve performance (complexity at worst O(N^2 log N)) but also increase accuracy in scale-free network topologies.
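As a simplified illustration of the likelihood idea (assuming a known start time and independent link delays, whereas the full method of Pinto et al. uses arrival-time differences and the full covariance matrix), one can rank candidate sources as follows; the graph, observers, and delay parameters are invented:

```python
import math
from collections import deque

def bfs_dist(graph, src):
    """Hop distances from src in an undirected graph (adjacency dict)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(graph, observed, mu=1.0, sigma=0.3):
    """Rank every node as a candidate source by the Gaussian
    log-likelihood of the observed arrival times. Each link delay is
    ~ N(mu, sigma^2), so over a shortest path of d hops the arrival
    time is ~ N(mu*d, sigma^2*d)."""
    best, best_ll = None, -math.inf
    for s in graph:
        d = bfs_dist(graph, s)
        ll = 0.0
        for obs, t in observed.items():
            m, var = mu * d[obs], sigma ** 2 * max(d[obs], 1)
            ll += -0.5 * math.log(2 * math.pi * var) - (t - m) ** 2 / (2 * var)
        if ll > best_ll:
            best, best_ll = s, ll
    return best

# A small path graph 0-1-2-3-4 with observers at both ends.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
observed = {0: 2.1, 4: 1.9}   # arrival times at the observers
print(locate_source(graph, observed))  # 2
```

This exhaustive scan is the O(N^4)-flavored baseline; the gradient-following variant described above would instead start at the earliest observer and only evaluate candidates along a path of increasing likelihood.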
Krzysztof Suchecki is an assistant professor at Warsaw University of Technology, Faculty of Physics. He has MSc and PhD degrees in physics. His research topics focus on dynamics of and on complex networks, such as Ising and voter model dynamics, including co-evolution of network structure and dynamical node states. His current research is focused on spreading of information in networks and methods to identify sources of information.
enabled this expansion, and how an increasing community of researchers has made it possible.
Julian Dolby is a Research Staff Member at the IBM Thomas J. Watson Research Center, where he works on a range of topics, including static program analysis, software testing, concurrent programming models and the semantic Web. He is one of the primary authors of the publicly available Watson Libraries for Analysis (WALA) program analysis infrastructure, and his recent WALA work has focused on creating the WALA Mobile infrastructure.
Complex systems typically display a modular structure, as modules are easier to assemble than the individual units of the system and more resilient to failures. In the network representation of complex systems, modules, or communities, appear as subgraphs whose nodes have an appreciably larger probability of being connected to each other than to other nodes of the network. In this talk I will address three fundamental questions: How is community structure generated? How can it be detected? How can the performance of community detection algorithms be tested? I will show that communities emerge naturally in growing network models that favor triadic closure, a mechanism underlying the generation of large classes of systems, such as social networks. I will discuss the limits of the most popular class of clustering algorithms, those based on the optimization of a global quality function, like modularity maximization. Testing algorithms is probably the single most important issue of network community detection, as it implicitly involves the concept of community, which is still controversial. I will discuss the importance of using realistic benchmark graphs with built-in community structure, as well as the role of metadata.
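For concreteness, the most common global quality function mentioned above, Newman-Girvan modularity, can be computed directly from its definition Q = (1/2m) * sum_ij [A_ij - k_i k_j / 2m] * delta(c_i, c_j); the following sketch and its two-triangle example are illustrative, not from the talk:

```python
def modularity(edges, communities):
    """Newman-Girvan modularity of a partition of an undirected,
    unweighted graph given as an edge list. communities maps each
    node to its community label."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for u, v in edges:                 # A_ij term: intra-community edges
        if communities[u] == communities[v]:
            q += 1.0 / m
    for u in deg:                      # null-model term k_i*k_j/(2m)^2
        for v in deg:
            if communities[u] == communities[v]:
                q -= deg[u] * deg[v] / (4.0 * m * m)
    return q

# Two triangles joined by a single bridge edge: the natural
# two-community split has clearly positive modularity.
triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
print(round(modularity(triangles, part), 3))  # 0.357
```

Putting all six nodes in a single community yields Q = 0, which is one way to see why optimizing Q prefers genuine modular splits, and also hints at the known limits (e.g. resolution effects) of modularity maximization discussed in the talk.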
The next generation of complex engineered systems will be endowed with sensors and computing capabilities that enable new design concepts and new modes of decision-making. For example, new sensing capabilities on aircraft will be exploited to assimilate data on system state, make inferences about system health, and issue predictions on future vehicle behavior---with quantified uncertainties---to support critical operational decisions. However, data alone is not sufficient to support this kind of decision-making; our approaches must exploit the synergies of physics-based predictive modeling and dynamic data. This talk describes our recent work in adaptive and multifidelity methods for optimization under uncertainty of large-scale problems in engineering design. We combine traditional projection-based model reduction methods with machine learning methods, to create data-driven adaptive reduced models. We develop multifidelity formulations to exploit a rich set of information sources, using cheap approximate models as much as possible while maintaining the quality of higher-fidelity estimates and associated guarantees of convergence.
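The speaker's methods combine model reduction with machine learning; as a much simpler illustration of the multifidelity principle (use a cheap model as much as possible while retaining high-fidelity accuracy), here is a basic two-fidelity control-variate Monte Carlo estimator with invented models:

```python
import math
import random

def multifidelity_mean(f_hi, f_lo, n_hi, n_lo, sample, alpha=1.0):
    """Two-fidelity control-variate estimate of E[f_hi(X)]: a few
    expensive high-fidelity evaluations, corrected by many cheap
    low-fidelity ones:
       est = mean_hi(f_hi) + alpha * (mean_lo(f_lo) - mean_hi(f_lo))
    Unbiased for any alpha; variance shrinks when f_lo tracks f_hi."""
    xs_hi = [sample() for _ in range(n_hi)]
    xs_lo = [sample() for _ in range(n_lo)]
    mean_hi_f = sum(f_hi(x) for x in xs_hi) / n_hi
    mean_hi_g = sum(f_lo(x) for x in xs_hi) / n_hi
    mean_lo_g = sum(f_lo(x) for x in xs_lo) / n_lo
    return mean_hi_f + alpha * (mean_lo_g - mean_hi_g)

random.seed(0)
f_hi = lambda x: math.sin(x)       # stand-in for an expensive model
f_lo = lambda x: x - x ** 3 / 6    # cheap Taylor surrogate
est = multifidelity_mean(f_hi, f_lo, n_hi=50, n_lo=20000,
                         sample=lambda: random.uniform(0, 1))
print(round(est, 3))   # close to 1 - cos(1) = 0.4597...
```

Because the surrogate is highly correlated with the true model, most of the sampling budget can be spent on cheap evaluations while the estimate stays anchored to the high-fidelity quantity of interest.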
Karen E. Willcox is Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. She is also Co-Director of the MIT Center for Computational Engineering and formerly the Associate Head of the MIT Department of Aeronautics and Astronautics. Before joining the faculty at MIT, she worked at Boeing Phantom Works with the Blended-Wing-Body aircraft design group. Willcox is currently Co-director of the Department of Energy DiaMonD Multifaceted Mathematics Capability Center on Mathematics at the Interfaces of Data, Models, and Decisions, and she leads an Air Force MURI on optimal design of multi-physics systems. She is also active in education innovation, serving as co-Chair of the MIT Online Education Policy Initiative, co-Chair of the 2013-2014 Institute-wide Task Force on the Future of MIT Education, and lead of the Fly-by-Wire project developing blended learning technology as part of the Department of Education's First in the World program.
A central theme in mechanism design is understanding the tradeoff between simplicity and optimality of the designed mechanism. An important and challenging task here is to design simple multi-item mechanisms that can approximately optimize the revenue, as the revenue-optimal mechanisms are known to be extremely complex and thus hard to implement. Recently, we have witnessed several breakthroughs on this front, obtaining simple and approximately optimal mechanisms when the buyers have unit-demand (Chawla et al. '10) or additive (Yao '15) valuations. Although these two settings are relatively similar, the techniques employed in these results are completely different and seemed difficult to extend to more general settings. In this talk, I will present a principled approach to designing simple and approximately optimal mechanisms based on duality theory. Our approach unifies and improves both of the aforementioned results, and extends these simple mechanisms to broader settings, i.e. multiple buyers with XOS valuations.
Based on joint work with Nikhil R. Devanur, Matt Weinberg and Mingfei Zhao.
Yang Cai is a William Dawson Assistant Professor of Computer Science at McGill University. He received his PhD in computer science from MIT, advised by Costis Daskalakis. He was a postdoctoral researcher at UC Berkeley. Yang's research interests lie in the area of theoretical computer science, in particular algorithmic game theory, online algorithms and logic.
Abstract: Although the concept of ransomware is not new (i.e., such attacks date back at least as far as the 1980s), this type of malware has recently experienced a resurgence in popularity. In fact, in 2014 and 2015, a number of high-profile ransomware attacks were reported, such as the large-scale attack against Sony that prompted the company to delay the release of the film "The Interview". Ransomware typically operates by locking the desktop of the victim to render the system inaccessible to the user, or by encrypting, overwriting, or deleting the user's files. However, while many generic malware detection systems have been proposed, none of these systems have attempted to specifically address the ransomware detection problem. In this keynote, I will talk about some of the trends we are seeing in ransomware. Then, I will present a novel dynamic analysis system called UNVEIL that is specifically designed to detect ransomware. The key insight of the analysis is that in order to mount a successful attack, ransomware must tamper with a user's files or desktop. UNVEIL automatically generates an artificial user environment and detects when ransomware interacts with user data. In parallel, the approach tracks changes to the system's desktop that indicate ransomware-like behavior. Our evaluation shows that UNVEIL significantly improves the state of the art, and is able to identify previously unknown evasive ransomware that was not detected by the anti-malware industry.
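The key insight, that ransomware must tamper with a user's files, can be sketched with a minimal decoy-file monitor (an illustration of the general idea only; UNVEIL's actual artificial-environment generation and desktop monitoring are far more sophisticated):

```python
import hashlib
import os
import tempfile

# Sketch of decoy-based tamper detection: plant bait files, fingerprint
# them with a cryptographic hash, then flag any file that was changed
# or deleted. This is an illustration, not UNVEIL's implementation.
def plant_decoys(directory, n=3):
    baseline = {}
    for i in range(n):
        path = os.path.join(directory, f"decoy_{i}.txt")
        with open(path, "w") as f:
            f.write(f"important-looking document {i}\n")
        with open(path, "rb") as f:
            baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def tampered_files(baseline):
    changed = []
    for path, digest in baseline.items():
        try:
            with open(path, "rb") as f:
                now = hashlib.sha256(f.read()).hexdigest()
        except FileNotFoundError:      # deletion also counts as tampering
            changed.append(path)
            continue
        if now != digest:              # encryption/overwrite changes the hash
            changed.append(path)
    return changed

with tempfile.TemporaryDirectory() as d:
    baseline = plant_decoys(d)
    # simulate a ransomware-style overwrite of one decoy file
    victim = sorted(baseline)[0]
    with open(victim, "w") as f:
        f.write("ENCRYPTED")
    alerts = tampered_files(baseline)
print(alerts)                          # only the overwritten decoy is flagged
```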
Bio: Engin Kirda holds the post of professor of computer science at Northeastern University in Boston. Before that, he held faculty positions at Institute Eurecom in the French Riviera and the Technical University of Vienna, where he co-founded the Secure Systems Lab that is now distributed over five institutions in Europe and the United States. Professor Kirda's research has focused on malware analysis (e.g., Anubis, Exposure, and Fire) and detection, web application security, and practical aspects of social networking security. He has co-authored more than 100 peer-reviewed scholarly publications and served on the program committees of numerous well-known international conferences and workshops. Professor Kirda was the program chair of the International Symposium on Recent Advances in Intrusion Detection (RAID) in 2009, the program chair of the European Workshop on Systems Security (Eurosec) in 2010 and 2011, the program chair of the well-known USENIX Workshop on Large-Scale Exploits and Emergent Threats in 2012, and the program chair of the security flagship conference Network and Distributed System Security Symposium (NDSS) in 2015. In the past, Professor Kirda has consulted for the European Commission on emerging threats, and recently gave a Congressional Briefing in Washington, D.C. on advanced malware attacks and cyber-security. He also spoke at SXSW Interactive 2015 about "Malware in the Wild" and at Blackhat 2015. Besides his roles at Northeastern, Professor Kirda is a co-founder of Lastline Inc., a Silicon Valley-based company that specializes in the detection and prevention of advanced targeted malware.
Abstract: Driven by modern applications and the abundance of empirical network data, network science aims at analysing real-world complex systems arising in the social, biological and physical sciences by abstracting them into networks (or graphs). The size and complexity of real networks have produced a deep change in the way that graphs are approached, and led to the development of new mathematical and computational tools. In this talk, I will present a data-driven, network-based methodology for approaching data analysis problems arising in a variety of contexts, while highlighting state-of-the-art network models including spatial, multilayer, interdependent and modular networks. I will describe different stages in the analysis of data, starting from (1) network representations of urban transportation systems, then (2) inference of structural patterns in networks and phase transitions in their detectability, and finally (3) understanding the implications of structural features for dynamical processes taking place on networks. I will conclude with a discussion of the future directions of my work and its intersection with complexity theory, machine learning and statistics.
Bio: I received my Bachelor of Science degree, cum laude, in Mathematics and Computer Science (double degree) from the Israel Institute of Technology (Technion) in 2008. I then shifted my focus towards acquiring industrial experience and worked as a Software Engineer at Diligent Technologies, an Israeli startup that was acquired by IBM. I completed my PhD in Computer Science at the University of St Andrews, UK under the guidance of Simon Dobson in 2014. During my PhD I was supported by a full prize scholarship from the Scottish Informatics and Computer Science Alliance (SICSA). In 2014 I joined Peter Mucha's group in the Department of Mathematics at the University of North Carolina at Chapel Hill as a Postdoctoral Scholar. My research has focused on the development of mathematical and computational tools for modeling and analyzing complex systems, roughly defined as large networks of simple components with no central control that give rise to emergent complex behavior. I believe that looking at data through a "network lens" provides us with useful perspectives on diverse problems, from designing optimal transportation systems and smart cities to clustering high-dimensional data. I am constantly looking for new datasets to which I can apply my "network machinery" to solve real-world problems and to inspire the development of new methodologies.
Abstract: Thanks to information technologies, online user behaviors are broadly recorded at an unprecedented level. This gives us an opportunity to gain insights into human behaviors and our societies from real data at a scale that makes manual analysis and inspection completely impractical, and to build intelligent and trustworthy systems. In this talk I will discuss the research problems, challenges, principles, and methodologies of developing network-based computational models for behavioral analysis. Specifically, I will present recent approaches on (1) modeling user behavior intentions with knowledge from the social and behavioral sciences for behavior prediction, recommendation, and suspicious behavior detection, (2) modeling social and spatiotemporal information to extract knowledge from behavioral contexts, (3) structuring behavioral content into information networks of entities and attributes, and (4) integrating structured and unstructured behavior data to support decision-making and information systems. Two results, CatchSync and MetaPAD, will be presented in detail. I will conclude with my thoughts on future directions.
Bio: Dr. Meng Jiang is a postdoctoral researcher at the University of Illinois at Urbana-Champaign. He received his Ph.D. from the Department of Computer Science at Tsinghua University in 2015. He obtained his bachelor's degree from the same department in 2010. He visited Carnegie Mellon University in 2013 and visited the University of Maryland, College Park in 2016. Find more about him here: http://www.meng-jiang.com.
His research lies in the field of data mining, focusing on user behavior modeling. He has delivered two book chapters and two conference tutorials on this topic. His Ph.D. thesis won the Dissertation Award at Tsinghua. His work on "Suspicious Behavior Detection" was selected as one of the Best Paper Finalists at KDD '14. His work on "Social Contextual Recommendation" has been deployed in the Tencent social network. The package from his work on "Attribute Discovery from Text Corpora" is being transferred to the U.S. Army Research Lab. He has published 20 papers with 560+ citations, including 15 papers with 360+ citations as the first author.
Abstract: Complex systems exist in almost every aspect of science and technology. My research focuses on how to understand, predict, control, and ultimately survive real-world complex systems in an ever-changing world facing the global challenges of climate change, weather extremes, and other natural and human disasters. I will present three recent works in the field of network science and complex systems: resilience, robustness, and control. (I) Resilience, a system's ability to adjust its activity to retain its basic functionality when errors and environmental changes occur, is a defining property of many complex systems. I will show a set of analytical tools with which to identify the natural control and state parameters of a multi-dimensional complex system, helping us derive an effective one-dimensional dynamics that accurately predicts the system's resilience. The analytical results unveil the network characteristics that can enhance or diminish resilience, offering ways to prevent the collapse of ecological, biological, or economic systems, and guiding the design of technological systems resilient to both internal failures and environmental changes. (II) Increasing evidence shows that real-world systems interact with one another, and the real goal in network science should not be just to understand individual networks, but to decipher the dynamical interactions in networks of networks (NONs). Malfunction of a few nodes in one network layer can cause cascading failures and catastrophic collapse of the entire system. I will show a general theoretical framework for analyzing the robustness of, and cascading failures in, NONs. The results on NONs have been surprisingly rich, and they differ from those of single networks so markedly that they present a new paradigm. (III) Controlling complex networks is the ultimate goal of understanding their dynamics. I will present a k-walk theory and a greedy algorithm for target control of complex networks.
Extending from the three aspects of my current research, I will describe two future research directions: the universality of competitive dynamics, and the control and recovery of damaged complex systems.
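For the resilience part (I), the reduction to an effective one-dimensional dynamics hinges on a single network parameter. A minimal sketch of how such a parameter can be computed from an adjacency matrix, following my reading of the resilience literature (the exact quantity used in the talk may differ):

```python
import numpy as np

# Sketch (my reading of the resilience-reduction idea, not code from the
# talk): collapse an N-dimensional networked system onto one effective
# parameter beta_eff = <s_out * s_in> / <s>, where s_in and s_out are the
# weighted in- and out-degrees of the adjacency matrix A.
def beta_eff(A):
    s_in = A.sum(axis=0)          # column sums: incoming weight per node
    s_out = A.sum(axis=1)         # row sums: outgoing weight per node
    return (s_out * s_in).mean() / s_in.mean()

# A homogeneous network: every node identical, beta_eff equals the degree.
ring = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
print(beta_eff(ring))             # 2.0 for an undirected 6-node ring

# A hub-and-spoke star is heterogeneous: beta_eff exceeds the mean degree.
star = np.zeros((6, 6)); star[0, 1:] = star[1:, 0] = 1
print(beta_eff(star), star.sum(axis=0).mean())
```

The point of such a parameter is that heterogeneous networks (like the star) behave as if they were denser than their mean degree suggests, which is one way degree heterogeneity can enhance resilience.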
Bio: Dr. Jianxi Gao is a research assistant professor in the Center for Complex Network Research at Northeastern University. Dr. Gao received his Ph.D. degree from Shanghai Jiao Tong University, where he studied from 2008 to 2012. During his Ph.D. he was a visiting scholar at Prof. H. Eugene Stanley's lab at Boston University from 2009 to 2012. Dr. Gao's major contributions include the theory of the robustness of networks of networks and the resilience of complex networks. Since 2010, Dr. Gao has published over 20 journal papers in Nature, Nature Physics, Nature Communications, Proceedings of the National Academy of Sciences, Physical Review Letters and more, with over 1,500 citations on Google Scholar. Dr. Gao serves on the editorial board of Nature Scientific Reports, has been a distinguished referee of EPL (2014-2016), and is a referee for Science, PNAS, PRL, PRX and more. His publications have been covered over 20 times by international popular and professional media.
Abstract: Can the people we follow on Twitter be recommended as our potential friends on Facebook? How much box office can US movies achieve in China? How do weather and nearby points-of-interest (POIs) affect the traffic routes planned for vehicles? For the same information entities, a large amount of information can be collected from various sources, each of which provides a specific signature of the entity from a unique underlying aspect. Effective fusion of these different information sources provides an opportunity to understand the information entities more comprehensively.
My thesis work investigates the principles, methodologies and algorithms for knowledge discovery across multiple aligned information sources, and evaluates the corresponding benefits. Fusing and mining multiple information sources of large Volumes and diverse Varieties is a fundamental problem in Big Data studies. In this talk, I will discuss information fusion and synergistic knowledge discovery, focusing on online social media, and present my algorithmic work on multi-source learning frameworks together with the evaluation results. I will conclude the talk with my future vision of fusion learning for broader real-world applications.
Bio: Jiawei Zhang is a Ph.D. candidate in the Department of Computer Science at the University of Illinois at Chicago (UIC), where he has been supervised by Prof. Philip S. Yu since August 2012. Prior to joining UIC, he obtained his Bachelor's Degree in Computer Science from Nanjing University in China. His research interests span the fields of Data Science, Data Mining, Network Mining, and Machine Learning. His research focuses on fusing multiple large-scale information sources of diverse varieties, and carrying out synergistic data mining tasks across these fused sources in one unified analytic framework.
His fusion learning work has appeared in KDD, ICDM, SDM, ECML/PKDD, IJCAI, WWW, WSDM, CIKM, and IEEE Transactions on Knowledge and Data Engineering (TKDE). He received the Best Student Paper Runner-Up Award at ASONAM '16. He has been serving as the information specialist and director of ACM Transactions on Knowledge Discovery from Data (TKDD) since August 2014. He has also been a PC member of WWW '17, KDD '16, CIKM '16, CIKM '15, and AIRS '16. Besides his academic experience at the University of Illinois at Chicago, he also has industrial research experience from Microsoft Research in 2014 and the IBM T.J. Watson Research Center in 2015.
The prediction and control of the dynamics of networked systems is one of the central problems in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, though the nature and common origins of such laws remain elusive. Do these universal laws exist? We do not have the answer to this question...yet. I will talk about the latent geometry approach to network systems, which, in my opinion, could be a first step toward the formulation of universal laws of network dynamics. In this approach, networks underlying complex systems are viewed as discretizations of smooth geometric spaces: network nodes are points in these spaces, and the probability of a connection between two nodes decreases with the latent distance between them. I will start my talk with a motivation and a high-level introduction to the latent geometry concept. I will continue with a (semi) rigorous discussion of the mathematics underlying the approach and computational algorithms for uncovering the latent geometries of real systems. I will conclude my talk by describing existing and prospective applications of latent geometry, including Internet interdomain routing, large-scale dynamics of networked systems, human diseases and social dynamics.
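A minimal numerical illustration of the latent-geometry idea (a generic toy model of my own; the talk's specific models, e.g. hyperbolic ones, differ): place nodes at random latent coordinates, connect pairs with probability decaying in latent distance, and observe that nearby pairs are far more likely to be linked.

```python
import numpy as np

# Toy latent-space network model (illustrative only): nodes are points on
# a line, and each pair connects with a probability that decays with
# their latent distance.
rng = np.random.default_rng(1)
n = 400
x = rng.uniform(0, 10, n)                  # latent coordinates
d = np.abs(x[:, None] - x[None, :])        # pairwise latent distances
p = np.exp(-d)                             # connection probability decays with d
A = rng.uniform(size=(n, n)) < p           # sample edges
A = np.triu(A, 1); A = A + A.T             # symmetrize, drop self-loops

# Empirically, nearby pairs should be linked far more often than distant ones.
iu = np.triu_indices(n, 1)
near = A[iu][d[iu] < 1].mean()
far = A[iu][d[iu] > 3].mean()
print(f"P(link | d < 1) = {near:.2f}, P(link | d > 3) = {far:.3f}")
```

Uncovering latent geometries of real systems is essentially this procedure run in reverse: given only the observed adjacency matrix, infer coordinates that make the observed connections likely.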
Bio: Dr. Kitsak is an associate research scientist in the Department of Physics and the Network Science Institute at Northeastern University. Dr. Kitsak earned his Ph.D. in theoretical physics from Boston University in 2009 under the direction of Prof. H. E. Stanley. Dr. Kitsak has held postdoctoral positions at the Center for Applied Internet Data Analysis (CAIDA), UC San Diego (2009-2012); and the Center for Complex Network Research (CCNR), Northeastern University (2012-2014). His research focuses on the development of theoretical and computational approaches to networked systems.
Abstract: Probabilistic reasoning lets you predict the future, infer past causes of current observations, and learn from experience. It can be hard to implement a probabilistic application because you have to implement the representation, inference, and learning algorithms. Probabilistic programming makes this much easier by providing an expressive language to represent models as well as inference and learning algorithms that automatically apply to models written in the language. In this talk, I will present the past, present, and future of probabilistic programming and our Figaro probabilistic programming system. I will start with the motivation for probabilistic programming and Figaro. After presenting some basic Figaro concepts, I will introduce several applications we have been developing at Charles River Analytics using Figaro. Finally, I will describe our future vision of providing a probabilistic programming tool that domain experts with no machine learning knowledge can use. In particular, I will present a new inference method that is designed to work well on a wide variety of problems with no user configuration. Prior knowledge of machine learning is not required to follow this talk.
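The division of labor the abstract describes, a model written as an ordinary generative program plus a generic inference algorithm that works on any such model, can be sketched in a few lines (plain Python for illustration; Figaro itself is a Scala library with a much richer element language and inference suite):

```python
import random

# The *model* is just a generative function: run it and it samples one
# possible world. This toy sprinkler model is my own illustration.
def model():
    rainy = random.random() < 0.2            # prior: 20% chance of rain
    sprinkler = random.random() < 0.4        # prior: 40% chance sprinkler ran
    wet_grass = rainy or (sprinkler and random.random() < 0.9)
    return {"rainy": rainy, "wet_grass": wet_grass}

def infer(model, evidence, query, n=100_000):
    # Generic inference by rejection sampling: keep only the samples that
    # are consistent with the observed evidence, then average the query.
    kept = [s for s in (model() for _ in range(n))
            if all(s[k] == v for k, v in evidence.items())]
    return sum(s[query] for s in kept) / len(kept)

random.seed(0)
# Infer a past cause from a current observation: P(rainy | wet grass).
post = infer(model, {"wet_grass": True}, "rainy")
print(f"P(rainy | wet_grass) = {post:.2f}")
```

Note that `infer` never looks inside `model`: any generative function works unchanged, which is the sense in which inference "automatically applies to models written in the language." (Real systems replace rejection sampling with far more efficient algorithms.)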
Bio: Dr. Avi Pfeffer is Chief Scientist at Charles River Analytics. Dr. Pfeffer is a leading researcher on a variety of computational intelligence techniques including probabilistic reasoning, machine learning, and computational game theory. Dr. Pfeffer has developed numerous innovative probabilistic representation and reasoning frameworks, such as probabilistic programming, which enables the development of probabilistic models using the full power of programming languages, and statistical relational reasoning. He is the lead developer of Charles River Analytics' Figaro probabilistic programming language. As an Associate Professor at Harvard, he developed IBAL, the first general-purpose probabilistic programming language. While at Harvard, he also produced systems for representing, reasoning about, and learning the beliefs, preferences, and decision-making strategies of people in strategic situations. Prior to joining Harvard, he invented object-oriented Bayesian networks and probabilistic relational models, which form the foundation of the field of statistical relational learning. Dr. Pfeffer serves as Program Chair of the Conference on Uncertainty in Artificial Intelligence. He has published many journal and conference articles and is the author of a text on probabilistic programming. Dr. Pfeffer received his Ph.D. in computer science from Stanford University and his B.A. in computer science from the University of California, Berkeley.
With great progress made in the areas of Cognitive Computing and Human-scale Sensory Research, the paradigm of human-computer interaction will become a collaboration between human beings and intelligent machines through a human-scale and immersive experience. What research challenges exist today to enable this new paradigm, how it will help human beings (especially groups of people), and what transformations it will bring to us are all important questions. In this talk, Dr. Su will introduce the vision of the future Cognitive and Immersive Situations Room - an immersive, interactive, intelligent physical environment - and the research agenda to realize that vision.
Bio: Dr. Hui Su is the Director of the Cognitive and Immersive Systems Lab, a collaboration between IBM Research and Rensselaer Polytechnic Institute. He has been a technical leader and an executive at IBM Research. Most recently, he was the Director of the IBM Cambridge Research Lab in Cambridge, MA and was responsible for a broad scope of global missions in IBM Research, including Cognitive User Experience, the Center for Innovation in Visual Analytics, and the Center for Social Business. As a technical leader and a researcher for 20 years at IBM Research, Dr. Su has been an expert in multiple areas ranging from Human-Computer Interaction and Cloud Computing to Visual Analytics and Neural Network Algorithms for Image Recognition. As an executive, he has been leading research labs and research teams in the U.S. and China.
Dimensionality reduction methods have been used to represent words with vectors in NLP applications since at least the introduction of latent semantic indexing in the late 1980s, but word embeddings developed in the past several years have exhibited a robust ability to map semantics in a surprisingly straightforward manner onto simple linear algebraic operations. These embeddings are trained on cooccurrence statistics and intuitively justified by appealing to the distributional hypothesis of Harris and Firth, but are typically presented in an ad-hoc algorithmic manner. We consider the canonical skip-gram Word2vec embedding, arguably the most well-known of these recent word embeddings, and establish a corresponding generative model that maps the composition of words onto the addition of their embeddings. By virtue of, first, the fact that words can be replaced more generally with arbitrary symbols and, second, natural connections between Word2vec, the classical RC(M) association model for contingency tables, Bayesian PCA decompositions, Poisson matrix completion, and the Sufficient Dimensionality Reduction model of Globerson and Tishby, our results are meaningful in a broad context.
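The claim that semantics maps onto simple linear algebraic operations can be illustrated with hand-made toy vectors (illustrative only, not trained skip-gram embeddings):

```python
import numpy as np

# Toy illustration of the abstract's central point: with hand-made
# 3-dimensional "embeddings" (one axis for royalty, one for gender, one
# for noise), semantic relations become vector arithmetic.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.1]),
    "man":   np.array([0.1, 0.8, 0.0]),
    "woman": np.array([0.1, 0.2, 0.0]),
}

def cosine(u, v):
    # standard cosine similarity between two embedding vectors
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# The classic analogy as linear algebra: king - man + woman ~ queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)
```

Trained Word2vec embeddings exhibit the same behavior only approximately and in hundreds of dimensions; the generative model described in the abstract aims to explain *why* composing words should correspond to adding their vectors at all.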