The Mathematical Sciences Colloquium series is held each semester, generally on Mondays at 4pm, and is sponsored by the math department. Faculty in the math department invite speakers from all areas of mathematics, and the talks are open to all members of the RPI community. The calendar is organized by the colloquium chair Rongjie Lai.
In the terahertz frequency range, the effective (complex-valued) surface conductivity of atomically thin 2D materials such as graphene has a positive imaginary part that is considerably larger than the real part. This feature allows for the propagation of slowly decaying electromagnetic waves, called surface plasmon-polaritons (SPPs), that are confined near the material interface with wavelengths much shorter than the wavelength of the free-space radiation.
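As a standard point of reference (not specific to this talk), the non-retarded dispersion relation for a TM surface wave supported by a conducting sheet with conductivity $\sigma(\omega) = \sigma' + i\sigma''$ in vacuum makes the mechanism explicit:

$$ q(\omega) \approx \frac{2i\varepsilon_0\omega}{\sigma(\omega)} = \frac{2\varepsilon_0\omega}{|\sigma|^2}\left(\sigma'' + i\sigma'\right). $$

When $\sigma'' \gg \sigma' > 0$, the SPP wavenumber $q$ has a large real part (tight confinement, $\mathrm{Re}\, q \gg \omega/c$) and a comparatively small imaginary part (slow decay), exactly the regime described above.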
Big data are often created by aggregating multiple data sources and are modeled as large-scale attributed networks. Many applications of big data analytics are concerned with discovering anomalous patterns (subnetworks) that are interesting or unexpected, such as disease outbreaks, subnetwork biomarkers, network intrusions, cyber threats, and societal events.
Leaky oil droplets that self-propel along the concentration gradients they create form an ideal system for studying collective behavior. I will present a simple model that can be reduced to a system of non-Markovian stochastic differential equations, allowing for analytical results that match the observed experimental system. The interaction force between particles manifests through their hovering above a bottom plate and their mutual repulsion. The model also displays a regime of super-diffusive scaling, likely related to the mobility transition to a constant-velocity solution of the deterministic system. The single non-dimensional parameter of the model controls the history of the interaction, allowing the system to range from having complete memory to behaving like particles interacting through electrostatic potentials.
All numerical calculations will fail to provide a reliable answer unless the continuous problem under consideration is well posed. Well-posedness depends in most cases only on the choice of boundary conditions. In this paper we will highlight this fact and exemplify it by discussing well-posedness of a prototype problem: the time-dependent compressible Navier–Stokes equations. We do not deal with discontinuous problems; smooth solutions with smooth and compatible data are considered.
In particular, we will discuss how many boundary conditions are required, where to impose them, and which form they should have in order to obtain a well-posed problem. Once the boundary conditions are known, one issue remains: they can be imposed weakly or strongly. It is shown that the weak and strong boundary procedures produce similar continuous energy estimates. We conclude by relating the well-posedness results to the energy stability of a numerical approximation on summation-by-parts form. It is shown that the results obtained for weak boundary conditions in the well-posedness analysis lead directly to corresponding stability results for the discrete problem, provided that schemes on summation-by-parts form with weak boundary conditions are used.
The analysis in this paper is general and can be extended without difficulty to any coupled system of partial differential equations posed as an initial boundary value problem, together with a numerical method on summation-by-parts form with weak boundary conditions. Our ambition in this paper is to give a general roadmap for how to construct a well-posed continuous problem and a stable numerical approximation, not to give exact answers to specific problems.
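As a minimal textbook illustration of the energy method underlying this type of analysis (not taken from the paper itself), consider the advection equation $u_t + a u_x = 0$ on $0 \le x \le 1$ with $a > 0$. The energy method gives

$$ \frac{d}{dt}\|u\|^2 = -a\left[u^2\right]_0^1 = a\,u(0,t)^2 - a\,u(1,t)^2, $$

so exactly one boundary condition is required, it must be placed at the inflow boundary $x = 0$, and a condition of the form $u(0,t) = g(t)$ bounds the energy growth and yields a well-posed problem.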
There are numerous and diverse challenges associated with analyzing data collected from different fields of science and engineering. This talk consists of two parts. First, time-dependent oscillatory signals occur in a wide range of fields, including geophysics, biology, medicine, finance, and social dynamics. Of great interest are techniques that decompose such signals into multiple oscillatory components with time-varying amplitudes and instantaneous frequencies. Such decompositions can help us better describe and quantify the underlying dynamics that govern the system. I will present a new advance in time-frequency representations whose effectiveness is justified by both numerical experiments and theoretical analysis. Second, the high dimensionality of point cloud data makes investigating such data difficult. Fortunately, these data often concentrate locally along a low-dimensional subspace, which makes the problem more tractable. I will talk about utilizing low-dimensional structures for various data analysis objectives, ranging from recovering the underlying data in the presence of complex noise (including additive Gaussian noise and large sparse corruptions) to recognizing subspace-based patterns in data, and from robust algorithm design to theoretical analysis. The techniques for learning subspaces have broad applications: image processing, computer vision, bioinformatics, medicine, etc. At the end, I will talk about some future directions that involve both fields.
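For concreteness, one standard model behind such time-frequency decompositions (stated here for orientation, not necessarily the exact model of the talk) writes the signal as a superposition of oscillatory modes,

$$ f(t) = \sum_{k=1}^{K} A_k(t)\cos\bigl(2\pi\phi_k(t)\bigr) + e(t), $$

where the amplitudes $A_k(t) > 0$ and instantaneous frequencies $\phi_k'(t) > 0$ vary slowly compared with the oscillations themselves and $e(t)$ is noise; the task is to recover each $A_k$ and $\phi_k'$ from $f$ alone.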
Technological advances in data acquisition and storage have led to a rapid proliferation of big data in diverse areas such as Internet search, information technology, healthcare, biology, and engineering. The data involved in many of these applications are large and growing faster than the capabilities of modern computers. This talk presents ways to handle such large amounts of data. The strategies include sampling small amounts of data, breaking variables into small blocks, and performing parallel computing. In the first part, a block stochastic gradient (BSG) method will be introduced. BSG inherits the advantages of both the stochastic gradient (SG) and block coordinate descent (BCD) methods, and it performs better than each of them individually. The second part of this talk will present an asynchronous parallel block coordinate update (ARock) method for fixed-point problems, which abstract many applications such as solving linear equations, convex optimization, statistical learning, and optimal control. Compared to its synchronous counterpart, ARock eliminates idle time, reduces memory-access congestion, and has perfect load balance. Numerical results show that ARock can achieve almost linear speed-up, while synchronous methods may suffer from load imbalance and exhibit poor speed-up.
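To fix ideas, here is a minimal sketch of the block stochastic gradient pattern in Python (illustrative only, not the speaker's exact algorithm; grad_sample, blocks, and the step-size rule are generic placeholders):

    import numpy as np

    def bsg(grad_sample, x0, blocks, n_samples, steps, lr=0.1):
        # grad_sample(x, i): gradient of the i-th sample's loss at x (placeholder)
        # blocks: list of index arrays partitioning the coordinates of x
        x = x0.copy()
        for k in range(steps):
            i = np.random.randint(n_samples)       # SG ingredient: one random sample
            b = blocks[k % len(blocks)]            # BCD ingredient: one coordinate block
            g = grad_sample(x, i)
            x[b] -= lr / np.sqrt(k + 1.0) * g[b]   # diminishing step size
        return x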
Low rank models exist in many applications, ranging from signal processing to data analysis. Typical examples include low rank matrix completion, phase retrieval, and spectrally sparse signal reconstruction. We will present a class of computationally efficient algorithms which are universally applicable to those low rank reconstruction problems. Theoretical recovery guarantees will be established for the proposed algorithms under different random models, showing that the sampling complexity is essentially proportional to the intrinsic dimension of the problems rather than the ambient dimension. Extensive numerical experiments demonstrate the efficacy of the algorithms.
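Known results of this type typically take the following schematic form: for an $n \times n$ matrix of rank $r$ satisfying standard incoherence assumptions, on the order of

$$ m \gtrsim C\, n r\, \mathrm{polylog}(n) $$

randomly sampled entries suffice for exact recovery with high probability, matching the $O(nr)$ degrees of freedom of a rank-$r$ matrix rather than the $n^2$ ambient dimension.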
Inspired by real-world networks consisting of layers that encode different types of connections, such as a social network at different instances in time, we study community structure in multilayer networks. We analyze fundamental limitations on the detectability of communities by developing random matrix theory for the dominant eigenvectors of modularity matrices that encode an aggregation of network layers. Aggregation is often beneficial when the layers are correlated, and it represents a crucial step for the discretization of time-varying network data, whereby layers are binned into time windows. We explore two methods for aggregation: summing the layers' adjacency matrices and thresholding this summation at some value. We develop theory for both large- and small-scale communities and analyze detectability phase transitions that set in as one varies either the density of within-community edges or the community size. We identify layer-aggregation strategies that are optimal in that they minimize the detectability limit. Our results indicate good practices, in the context of community detection, for how to aggregate network layers, threshold pairwise-interaction data matrices, and discretize time-varying network data. We apply these results to synthetic and empirical networks, including a study of anomaly detection for the Enron email corpus.
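As a concrete reference point, a minimal sketch of aggregation by summation followed by spectral bipartition with the modularity matrix (illustrative only; the function name is a placeholder):

    import numpy as np

    def aggregate_and_bipartition(layers):
        # layers: list of symmetric adjacency matrices of equal size
        A = sum(layers)                        # aggregate by summing layers
        d = A.sum(axis=1)                      # degrees of the aggregate network
        m = d.sum() / 2.0                      # total edge weight
        B = A - np.outer(d, d) / (2.0 * m)     # modularity matrix
        vals, vecs = np.linalg.eigh(B)
        return np.sign(vecs[:, -1])            # communities from the dominant eigenvector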
Photo-acoustic tomography (PAT) and thermo-acoustic tomography (TAT) are novel hybrid modalities in medical imaging. A hybrid modality combines a high-resolution physical phenomenon with a high-contrast one, with the aim of preserving the advantages of both. In PAT and TAT, a high-resolution ultrasound wave is coupled with a high-contrast optical or electromagnetic wave through the photo-acoustic effect. The study of their mathematical models can be divided into two steps. The first step concerns the recovery of the radiation absorbed by tissues from boundary measurements of ultrasound signals. This amounts to solving an inverse source problem for the acoustic wave equation. The second step consists of recovering the optical or electromagnetic parameters of tissues from the absorbed radiation. This leads to inverse problems with internal measurements. In this talk, we will discuss the models underlying PAT and TAT and present several results concerning uniqueness, stability, and reconstruction procedures for these inverse problems.
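The first step admits a concise standard formulation: the pressure $p$ solves

$$ \partial_t^2 p - c^2(x)\,\Delta p = 0, \qquad p(x,0) = f(x), \qquad \partial_t p(x,0) = 0, $$

and one seeks to recover the initial source $f$, which encodes the absorbed radiation, from the measurements $p(y,t)$ taken at detector locations $y$ on the boundary for $t > 0$.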
In this talk, I propose several efficient, reliable, and practical computational algorithms to solve challenging optimization problems arising in medical imaging and image processing. These problems are non-differentiable, ill-conditioned, non-convex, and/or highly nonlinear, so traditional subgradient-based methods converge very slowly. To tackle the computational complexities, I use relaxation and approximation techniques. In addition, I exploit variable splitting and the alternating direction method of multipliers (ADMM) to decouple the original challenging problems into subproblems that are easier to solve. To obtain fast results, I develop innovative line search strategies and solve the subproblems by Fourier transforms and shrinkage operators. I present the analytical properties of these algorithms as well as various numerical experiments on parallel magnetic resonance imaging, image inpainting, and image colorization. Comparisons with existing state-of-the-art methods are given to show the efficiency and effectiveness of the proposed methods.
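As a generic illustration of the variable-splitting and shrinkage pattern described above (a minimal ADMM sketch for the lasso problem, not one of the talk's imaging problems):

    import numpy as np

    def shrink(v, t):
        # soft-thresholding: the proximal operator of the l1 norm
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def admm_lasso(A, b, lam, rho=1.0, iters=200):
        # solves min_x 0.5*||Ax - b||^2 + lam*||x||_1 via the splitting x = z
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached once
        Atb = A.T @ b
        for _ in range(iters):
            x = M @ (Atb + rho * (z - u))   # smooth subproblem: a linear solve
            z = shrink(x + u, lam / rho)    # nonsmooth subproblem: shrinkage
            u += x - z                      # dual (multiplier) update
        return z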
LIGO’s detection of gravitational waves from a binary black hole merger inaugurates a completely new mode of observational astronomy and represents the culmination of a quest lasting half a century. After a brief review of gravitational waves in general relativity, I will discuss the detection itself. How do the LIGO instruments work? How do we know the signal was caused by a binary black hole merger? What does this detection tell us about binary black holes? Then I will focus on how this moment came to pass. The detection required many ingredients to be in place, including (1) developments in theoretical relativity that allowed proof that gravitational waves were not coordinate artifacts; (2) a bold vision to recognize that gravitational wave detection was not impossible; (3) technological developments of novel vacuum systems, lasers, optical coatings, active seismic isolation, etc.; (4) the successful conclusion of a 35-year effort to simulate binary black holes on the computer; (5) the development of sophisticated new data analysis methods to tease a waveform from noisy data; (6) the growth of the field of gravitational wave science from a handful of practitioners to the more than 1000 authors on the detection paper; and finally (7) the (nearly) unwavering support of the National Science Foundation. The first detection was followed by a second one in this first "science run" and soon another science run will begin. I will end with a discussion of the future — more binary black holes, other sources of gravitational waves and what we might learn, instrument upgrades, new facilities — and other ways to detect gravitational waves — from space and from monitoring millisecond pulsars.
Wave breaking in deep oceans is a challenge that still defies complete scientific understanding. Sailors know that at wind speeds of approximately 5 m/s, the random-looking windblown surface begins to develop patches of white foam ('whitecaps') near sharply angled wave crests. We idealize such a sea locally by a family of close-to-maximum-amplitude Stokes waves and show, using highly accurate simulation algorithms based on a conformal map representation, that perturbed Stokes waves develop the universal feature of an overturning plunging jet. We analyze the cases both with and without surface tension. In the latter case, we show the plunging jet is regularized by capillary waves which rapidly become nonlinear Crapper waves, in whose trough pockets whitecaps may be spawned.
The "particle-in-cell" (PIC) method is a technique for solving kinetic PDEs that has been a standard simulation tool in plasma physics for 50 years. Originally, the method was an attempt to circumvent the curse of dimensionality when solving high-dimensional kinetic PDEs by combining particle- and grid-based representations. The technique has been enormously successful in many regards but even today, generating a quantitatively accurate solution in complex, three-dimensional geometry requires many hours on a massively parallel machine.
Two prominent reasons for the massive complexity of PIC schemes are the statistical noise introduced by the particle representation and the fact that multiple disparate physical time scales necessitate taking enormous numbers of time steps. We present approaches to circumventing each of these difficulties. First, we propose the use of 'sparse grids' (see e.g. Griebel et al., 1990) to estimate grid-based quantities from particle information. We show that this can dramatically reduce statistical errors while only increasing grid-based error by a logarithmic factor. Second, we present a multilevel-in-time technique in the spirit of the multilevel Monte Carlo (MLMC) method (see e.g. Giles, 2008). The idea is to combine information from simulations using many particles and a large time step on the one hand with simulations using few particles and a small time step on the other. This is done in such a way as to generate a new solution that mimics one with many particles and a small time step, but at dramatically reduced cost. Scalings of the computational complexity of PIC codes using each of these approaches will be discussed, and proof-of-principle results will be presented from solving the 4-D Vlasov-Poisson PDE. Finally, we will discuss the prospects for combining the two approaches, parallel issues, and other future directions.
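The combination step is in the spirit of the standard multilevel identity (Giles, 2008): writing $P_\ell$ for the estimator at level $\ell$, with the time step shrinking as $\ell$ grows,

$$ \mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L}\mathbb{E}\left[P_\ell - P_{\ell-1}\right], $$

where the coarse term $P_0$ can be sampled cheaply with many particles, while each correction $P_\ell - P_{\ell-1}$ has small variance and therefore needs only a few expensive fine-level samples.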
Computational nanophotonics is one of the central tools of the science of light and photonic device engineering. It plays a crucial role in enabling optical technologies ranging from bio-sensing to quantum information processing. Up to the present, a plethora of techniques and commercial software packages founded on conventional computational electromagnetics methods have been developed. After a brief review of previous work based on the innovative methods of transformation optics, I will present a new class of elliptic omnidirectional concentrators focusing light on a disk, a thin strip, or a rod. This study expands the theory of a circular omnidirectional concentrator—an ‘optical black hole’—previously developed by our team and then experimentally demonstrated at microwave and optical spectral bands as well as in acoustics. Our ray-tracing and full-wave simulations of the new elliptic designs show flawless focusing and absorbing performance over the full range of acceptance angles.
Kernel-based non-linear dimensionality reduction methods, such as Local Linear Embedding (LLE) and Laplacian Eigenmaps, rely heavily upon pairwise distances or similarity scores, with which one can construct and study a weighted graph associated with the data set. When each individual data object carries structural details, the correspondence relations between these structures provide additional information that can be leveraged for studying the data set using the graph. In this talk, I will introduce the framework of Horizontal Diffusion Maps (HDM), a generalization of Diffusion Maps in manifold learning. This framework models a data set with pairwise structural correspondences as a fibre bundle equipped with a connection. We further demonstrate the advantage of incorporating such additional information and study the asymptotic behavior of HDM on general fibre bundles.
In a broader context, HDM reveals the sub-Riemannian structure of high-dimensional data sets and provides a nonparametric learning framework for data sets with structural correspondences. More generally, it can be viewed as a geometric realization of synchronization problems. A synchronization problem for a group $G$ and a graph $\Gamma=\left(V, E\right)$ searches for an assignment of elements in $G$ to edges of $\Gamma$ so that the overall configuration minimizes an energy functional under certain compatibility constraints; it is essentially a generalization to the non-commutative setting of the little Grothendieck problem. In this talk, I will also explain some recent work on the cohomological nature of problems of this type.
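In one common formulation (stated here for orientation, not necessarily the exact convention of the talk), synchronization seeks group elements $g_v \in G$ at the vertices that are consistent with given edge measurements $\rho_{uv} \in G$, by minimizing

$$ \min_{\{g_v\}_{v \in V}} \sum_{(u,v) \in E} w_{uv}\, d^2\!\left(\rho_{uv},\, g_u g_v^{-1}\right), $$

where $d$ is a bi-invariant metric on $G$ and $w_{uv}$ are edge weights; for $G = O(d)$ this admits spectral and semidefinite relaxations, which is precisely the link to the little Grothendieck problem.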
Our interest in synchronization and diffusion geometry arises from the emerging field of automated geometric morphometrics. At present, evolutionary anthropologists who use physical traits to study evolutionary relationships among living and extinct animals analyze morphological data extracted from carefully defined anatomical landmarks. Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphometricians. This necessity renders these studies inaccessible to non-morphologists and causes phenomics to lag behind genomics in elucidating evolutionary patterns. This talk will also cover the application of our work to the automation of this morphological analysis in a landmark-free manner.
Nonlinear evolution PDEs are a central topic in mathematical research, not only due to their inner beauty and complexity but also thanks to their broad range of real-world applications, from physics and biology to finance and economics. The first part of this talk is devoted to a new approach developed in collaboration with A.S. Fokas and A. Himonas for the well-posedness of initial-boundary value problems for such PDEs in one spatial dimension. In particular, it is shown that the nonlinear Schrödinger (NLS) and the Korteweg-de Vries (KdV) equations are well-posed on the half-line with data in appropriate Sobolev spaces. The second part of the talk is concerned with the initial value problem for a nonlocal, nonlinear evolution PDE of Camassa-Holm type with cubic nonlinearity, which is integrable, admits periodic and non-periodic multi-peakon traveling wave solutions, and can be derived as a shallow water approximation to the celebrated Euler equations. Finally, the third part of the talk addresses a long-standing open question, namely the nonlinear stage of modulational instability (a.k.a. Benjamin-Feir instability), which is one of the most ubiquitous phenomena in nonlinear science. For all those physical systems governed by the focusing NLS equation, a precise characterization of the nonlinear stage of modulational instability is obtained by computing explicitly the long-time asymptotic behavior of the relevant initial value problem formulated with nonzero boundary conditions at infinity.
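For reference, the focusing NLS equation with nonzero boundary conditions referred to in the third part can be written, in one standard normalization, as

$$ i q_t + q_{xx} + 2|q|^2 q = 0, \qquad q(x,t) \to q_\pm \ \text{ as } x \to \pm\infty, \qquad |q_\pm| = q_0 > 0, $$

whose constant background solution is modulationally unstable; the long-time asymptotics of the corresponding initial value problem provide the precise characterization of the nonlinear stage of the instability.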
Many physical systems admit mathematical models from contact geometry, and symmetries of the corresponding geometric structure provide the modeler with insights that can be obtained in no other way. In this talk I will introduce contact geometry through a selection of examples arising from fluid mechanics, Hamiltonian dynamics, and Riemannian geometry. Finally, because contact geometry is defined using the language of differential forms, it may seem appropriate only for problems that admit smooth formulations; however, if time permits, I will also explain the extension of smooth contact dynamics to topological dynamics.
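For orientation, a contact structure on a $(2n+1)$-dimensional manifold $M$ is the kernel of a one-form $\alpha$ satisfying the maximal non-integrability condition

$$ \alpha \wedge (d\alpha)^n \neq 0 \quad \text{everywhere on } M, $$

and the associated Reeb vector field $R$, defined by $\alpha(R) = 1$ and $d\alpha(R, \cdot) = 0$, generates the dynamics in examples of the kind mentioned above.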
The relatively recent introduction of viscosity solutions and the Barles-Souganidis convergence framework have allowed for considerable progress in the numerical solution of fully nonlinear elliptic equations. Convergent, wide-stencil finite difference methods now exist for a variety of problems. However, these schemes are defined only on uniform Cartesian meshes over a rectangular domain. We describe a framework for constructing convergent meshfree finite difference approximations for a class of nonlinear elliptic operators. These approximations are defined on unstructured point clouds, which allows for computation on non-uniform meshes and complicated geometries. Because the schemes are monotone, they fit within the Barles-Souganidis convergence framework and can serve as a foundation for higher-order filtered methods. We present computational results for several examples including problems posed on random point clouds, computation of convex envelopes, obstacle problems, Monge-Ampère equations, and non-continuous solutions of the prescribed Gaussian curvature equation.
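A representative member of this class of equations is the Monge-Ampère equation

$$ \det D^2 u(x) = f(x), \qquad u \ \text{convex}, $$

which is elliptic only on convex functions; monotone (degenerate elliptic) discretizations of such operators are exactly what the Barles-Souganidis framework requires for convergence.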
Most dynamical processes are continuous, whereas in experiments signals are often measured in the form of discrete spatiotemporal series, and conclusions are drawn by analyzing these sampled signals. In this talk, I will present two examples showing how different samplings may lead to data-processing artifacts, and I will provide corresponding approaches to extract the intrinsic properties of the underlying continuous processes. The first example concerns analyzing spatiotemporal activities measured by voltage-sensitive-dye-based optical imaging in the primary visual cortex of the awake monkey. Through computational modeling, we show that our model can capture the phenomena observed in experiment and can separate them from the statistical effects arising from the spatial averaging procedures used in experiment. The second example concerns analyzing Granger causality for information flow within continuous dynamical processes. We show that different sampling rates may yield incorrect causal inferences, and that such sampling artifacts can be present for both linear and nonlinear processes. We show how these hazards lead to incorrect network reconstructions and describe a strategy for obtaining reliable Granger causality inferences.
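As a concrete reference point, pairwise linear Granger causality is often computed by comparing autoregressive fits with and without the candidate driver's past, and it is exactly this construction that is sensitive to the sampling rate (a minimal sketch; the model order p is a placeholder):

    import numpy as np

    def granger_y_to_x(x, y, p=5):
        # log-ratio of residual variances for order-p AR fits of x,
        # without vs. with the past of y; > 0 suggests y Granger-causes x
        n = len(x)                              # assumes len(x) == len(y) > p
        past_x = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
        past_y = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
        target = x[p:]
        Xr, Xf = past_x, np.column_stack([past_x, past_y])
        rr = target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]
        rf = target - Xf @ np.linalg.lstsq(Xf, target, rcond=None)[0]
        return np.log(np.var(rr) / np.var(rf))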
Oscillations in the brain are associated with learning, memory, and other cognitive functions. Evidence shows that inhibitory neurons play an important role in brain oscillations. Yet, how various types of inhibitory neurons contribute to the generation of oscillations remains unclear. Here we address the issue of what mathematical tools can be used to reveal the information flow accompanying oscillations in the brain. By recording inhibitory neurons in the hippocampus of freely behaving mice and using time-delayed mutual information, we identify two classes of inhibitory neurons whose firing activities share high mutual information with the slow theta band (4-12 Hz) and the fast ripple band (100-250 Hz) of the local field potential, respectively. The direction of information flow further suggests their distinct contributions to theta and ripple oscillations. In contrast, Granger causality analysis fails here to infer the causality between the activities of inhibitory neurons and hippocampal oscillations.
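For concreteness, a minimal histogram-based estimate of time-delayed mutual information between two signals might look as follows (an illustrative sketch, not the study's estimator; the bin count and lag handling are simplifications):

    import numpy as np

    def tdmi(x, y, lag, bins=16):
        # mutual information between x(t) and y(t + lag); assumes len(x) == len(y)
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        pxy, _, _ = np.histogram2d(a, b, bins=bins)
        pxy /= pxy.sum()                        # joint distribution estimate
        px = pxy.sum(axis=1, keepdims=True)     # marginal of x
        py = pxy.sum(axis=0, keepdims=True)     # marginal of y
        nz = pxy > 0
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))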
A large variety of observable phenomena, such as magnetization reversals, are mathematically described as transitions between metastable states in a system with many degrees of freedom. Metastability refers to the system spending extended periods of time, relative to its natural time scale, in localized regions of phase space, transiting infrequently between them. As a toy system for a nanomagnet, I investigate a Langevin equation that limits to a stochastic partial differential equation as the dimension goes to infinity. Consistent with an energy-barrier viewpoint, I show how time-scale-separation averaging can be used to describe mean transition times in a low-dimensional, low-damping regime. For the infinite-dimensional system, I show how metastability can be explained by an entropic barrier in phase space, even though transition times remain exponential in the energy barrier height. The difference lies in the prefactor in front of the exponential term, which depends on an effective dimension of the system.
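The energy-barrier picture invoked here is the classical Arrhenius/Kramers asymptotic for small noise strength $\varepsilon$, stated schematically as

$$ \mathbb{E}[\tau] \sim C\, e^{\Delta E/\varepsilon} \qquad \text{as } \varepsilon \to 0, $$

where $\tau$ is the transition time and $\Delta E$ the barrier height; it is the prefactor $C$, not the exponent, that encodes the entropic barrier and the effective dimension of the system.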
As events of the past decade have tragically demonstrated, tsunamis pose a major risk to coastal populations around the world. Numerical modeling is an important tool in better understanding past tsunamis and their geophysical sources, in real-time warning and evacuation, and in assessing hazards and mitigating the risk of future tsunamis. I will discuss a variety of techniques from adaptive mesh refinement to probabilistic hazard analysis that are being used for tsunamis and related geophysical hazards.
Cancer is a class of diseases characterized by abnormal cell growth and the ability to spread to other parts of the body. Different combinations of genetic mutations cause different types of cancer, and identifying the combinations of mutations responsible for cancer is essential for finding more effective treatments. Identifying these mutations, which requires separating driver mutations from a much larger number of passenger mutations, is a difficult task. However, the advent of inexpensive next-generation sequencing techniques, coupled with the development of novel algorithms that incorporate ideas from biology, computer science, and mathematics, provides the potential for more personalized and more targeted cancer treatments. In this talk, we first briefly review the biology of cancer. We then survey various computational methods for identifying driver mutations in cancer, along with their mathematical motivations. We finally explore our work on a particular computational technique for identifying groups of driver mutations using biological networks and mutation data.
Problems governed by wave propagation underlie many of the physical phenomena we experience. Thus the development of better tools for simulating waves has the potential for significant impact. Crucial components of an efficient time-domain solver are robust high-resolution volume discretizations applicable in complex geometry. Our focus is on high-order energy-stable volume discretization methods applicable on hybrid grids. In particular we will discuss a new formulation of upwind discontinuous Galerkin methods for wave equations in second-order form, Galerkin methods on structured grids, and methods built from Hermite interpolation.
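Here 'energy stable' refers to discretizations that mimic the continuous energy balance: for the wave equation in second-order form, $u_{tt} = c^2 \Delta u$, with energy-conserving boundary conditions the quantity

$$ E(t) = \frac{1}{2}\int_{\Omega}\left(u_t^2 + c^2\,|\nabla u|^2\right)dx $$

is conserved, and a stable scheme guarantees a discrete analogue of this balance.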
For large-scale nonsmooth convex optimization problems, first-order methods involving only subgradients are usually used, thanks to their scalability with problem size. Douglas-Rachford (DR) splitting is one of the most popular first-order methods in practice. It is well known that DR applied to the dual problem is equivalent to the alternating direction method of multipliers (ADMM), widely used in nonlinear mechanics, and to the split Bregman method in the image processing community. When DR is applied to convex optimization problems such as compressive sensing, one question of practical interest is how the parameters in DR affect performance. We will show an explicit formula for the sharp asymptotic convergence rate of DR for simple L1 minimization. The analysis will be verified on examples of processing seismic data in the curvelet domain. This is joint work with Prof. Laurent Demanet at MIT.
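To fix ideas, a minimal Douglas-Rachford sketch for the basis-pursuit form of L1 minimization, min ||x||_1 subject to Ax = b (illustrative only; the talk concerns the sharp dependence of the asymptotic rate on the parameter gamma):

    import numpy as np

    def shrink(v, t):
        # soft-thresholding: the proximal operator of the l1 norm
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def dr_l1(A, b, gamma=1.0, iters=500):
        AAt_inv = np.linalg.inv(A @ A.T)
        proj = lambda z: z - A.T @ (AAt_inv @ (A @ z - b))   # projection onto {z : Az = b}
        z = np.zeros(A.shape[1])
        for _ in range(iters):
            x = shrink(z, gamma)           # prox of gamma * ||.||_1
            z = z + proj(2 * x - z) - x    # Douglas-Rachford update
        return shrink(z, gamma)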
Computer-assisted or automated analysis of atomic-scale resolution images of polycrystalline materials has important applications in characterizing and understanding material microstructure. In this talk, we will discuss some recent progress in crystal image analysis using 2D synchrosqueezed transforms combined with variational approaches. This talk is based on joint work with Benedikt Wirth, Haizhao Yang, and Lexing Ying.