The International Conference on New Trends in Computational and Data Sciences
With the rapid development of technology, computational and applied mathematics has come to play an important and unprecedented role in many scientific disciplines. The main objective of this conference is to bring together researchers, students, and practitioners with interest in the theoretical, computational, and practical aspects of scientific computing, optimization, image processing, and data science, areas to which Tony Chan has made impactful contributions. Held in honor of his 70th birthday, this conference will cover recent progress, stimulate new ideas, and facilitate interdisciplinary collaborations. It will emphasize the crucial and unique role of mathematical insights in advanced algorithm design and novel real-world applications.
This conference will be held at Caltech on December 19-21, 2022 (Monday-Wednesday). Invited speakers range from mathematicians to computer scientists and industry experts. The meeting will feature distinguished lectures from leaders in the related fields and panel discussions on future directions. Contributed poster presentations and participation from young scientists and graduate students are strongly encouraged. There will be a conference banquet on Tuesday evening. Registration is required to attend the conference.
Please Note:
- We have reached our registration capacity for the Conference and are no longer accepting additional registrations.
- To ensure that we will not exceed our venue's capacity, we will not accept unregistered attendees at check-in.
- All registered attendees must check in at the Check-In Table upon arrival. The Check-In Table will be located outdoors, immediately north of the Annenberg Center.
Program
The location for all talks will be Annenberg 105 (Auditorium).
Monday, December 19
Morning Session Chair: Thomas Hou
8:00-8:50am
COVID antigen tests (provided by the Conference) are required upon arrival in order to attend the Conference. Name badges will be provided to attendees upon presenting a negative test result to the Conference staff.
8:50-9:00am
--
9:00-9:30am
Derivative-free optimization methods are used when the gradient of the objective or loss function is unknown or computationally costly. A stochastic component is often added for global convergence when the objective function is not convex, in order to escape from local minima. We propose an algorithm with adaptive, state-dependent variance that can be proved to have global convergence with an algebraic rate. This holds even if no approximate gradient information is used in any step; with approximate gradient information, however, the convergence is faster. Results from numerical tests will be described.
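The abstract does not spell out the algorithm; as a rough illustration of adaptive, state-dependent variance, here is a minimal zeroth-order random search in Python in which the proposal variance is tied to the current objective value. The variance rule is a hypothetical choice for illustration, not the authors' scheme.

```python
import numpy as np

def dfo_adaptive(f, x0, iters=5000, c=0.5, seed=0):
    # Minimal zeroth-order random search with state-dependent variance:
    # the proposal variance shrinks with the current objective value
    # (assumed nonnegative here), so steps are large far from minima,
    # helping escape local basins, and small near them.
    # This variance rule is illustrative, not the one from the talk.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        sigma = c * np.sqrt(max(fx, 1e-12))   # state-dependent std dev
        y = x + sigma * rng.standard_normal(x.shape)
        fy = f(y)
        if fy < fx:                           # greedy accept/reject
            x, fx = y, fy
    return x, fx

# Example on a non-convex objective with many local minima:
f = lambda v: np.sum(v**2) + 2.0 * np.sum(1.0 - np.cos(3.0 * v))
x_best, f_best = dfo_adaptive(f, np.full(5, 3.0))
```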
9:30-10:00am
We introduce a new swarm-based gradient descent (SBGD) method for non-convex optimization. The swarm consists of agents, each identified with a position, x, and a mass, m. The key to their dynamics is the transition of mass from high to lower ground, and a time-stepping protocol, h(x,m). Accordingly, we distinguish between 'heavier' leaders, expected to approach local minima with small time steps, and 'lighter' explorers, taking large time steps, who are therefore expected to encounter improved positions for the swarm; if they do, they assume the role of heavy swarm leaders, and so on. Convergence analysis and numerical simulations demonstrate the effectiveness of the SBGD method as a global optimizer.
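A toy sketch of these swarm dynamics follows; the mass-transfer rule and the step protocol h(x, m) below are invented for illustration and are not the protocol from the talk.

```python
import numpy as np

def sbgd_toy(f, grad, X0, iters=500, rate=0.05):
    # Toy swarm-based gradient descent: agents carry positions and masses.
    # Mass flows from agents on high ground toward the current best agent;
    # heavy 'leaders' take small steps, light 'explorers' take large ones.
    # The transfer rule and h(x, m) here are illustrative guesses.
    X = np.array(X0, dtype=float)            # (N, d) agent positions
    N = len(X)
    m = np.full(N, 1.0 / N)                  # equal initial masses
    for _ in range(iters):
        fvals = np.array([f(x) for x in X])
        best = int(fvals.argmin())
        shed = rate * m                      # non-leaders shed mass...
        shed[best] = 0.0
        m = m - shed
        m[best] += shed.sum()                # ...which flows to the leader
        h = 0.1 * (1.0 - m / m.max()) + 1e-3 # step size shrinks with mass
        X = X - h[:, None] * np.array([grad(x) for x in X])
    return X[int(m.argmax())]                # heaviest agent ~ best minimum

# Example on a Rastrigin-like 2D landscape:
f = lambda v: np.sum(v**2 + 2.0 * (1.0 - np.cos(3.0 * v)))
grad = lambda v: 2.0 * v + 6.0 * np.sin(3.0 * v)
x_star = sbgd_toy(f, grad, np.random.default_rng(0).uniform(-4, 4, (30, 2)))
```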
10:00-10:30am
Hyperspectral images often have hundreds of spectral bands of different wavelengths captured by aircraft or satellites. Identifying detailed classes of pixels becomes feasible due to the enhanced spectral and spatial resolution of hyperspectral images. In this work, we propose a novel framework that utilizes both spatial and spectral information for classifying pixels in hyperspectral images. The method consists of three stages. In the first stage, the pre-processing stage, the Nested Sliding Window algorithm is used to reconstruct the original data by enhancing the consistency of neighboring pixels, and then Principal Component Analysis is used to reduce the dimension of the data. In the second stage, Support Vector Machines are trained to estimate the pixel-wise probability map of each class using the spectral information from the images. Finally, a smoothed total variation model is applied to smooth the class probability vectors by ensuring spatial connectivity in the images. We demonstrate the superiority of our method against three state-of-the-art algorithms on six benchmark hyperspectral data sets with 10 to 50 training labels for each class. The results show that our method gives the overall best performance in accuracy. In particular, our gain in accuracy increases when the number of labeled pixels decreases. Our method is therefore of great practical significance, since expert annotations are often expensive and difficult to collect.
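A compressed sketch of the three-stage pipeline, using scikit-learn, with two simplifications flagged in the comments: the Nested Sliding Window reconstruction is omitted, and Gaussian smoothing stands in for the smoothed total variation model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from scipy.ndimage import gaussian_filter

def classify_hsi(cube, train_mask, labels, n_components=30, sigma=1.0):
    # Sketch of the three-stage pipeline on a (H, W, B) hyperspectral cube.
    # train_mask: boolean (H, W); labels: flat (H*W,) label array, used
    # only at training pixels. Stage 1 here is PCA alone (the Nested
    # Sliding Window step is omitted); stage 3 uses Gaussian smoothing as
    # a crude stand-in for the smoothed total variation model.
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    Xr = PCA(n_components=n_components).fit_transform(X)       # stage 1
    svm = SVC(probability=True).fit(Xr[train_mask.ravel()],    # stage 2
                                    labels[train_mask.ravel()])
    P = svm.predict_proba(Xr).reshape(H, W, -1)                # class probs
    for c in range(P.shape[-1]):                               # stage 3
        P[..., c] = gaussian_filter(P[..., c], sigma=sigma)
    return P.argmax(axis=-1)                                   # (H, W) map
```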
10:30-11:00am
--
11:00-11:30am
In this talk, I will report some recent developments in the design and analysis of neural network (NN) based methods, such as physics-informed neural networks (PINN) and the finite neuron method (FNM), for the numerical solution of partial differential equations (PDEs). After first giving an overview of our convergence analysis of the FNM, I will focus on the training algorithms for solving the relevant optimization problems. I will present a theoretical result that explains the success, as well as the challenges, of PINN and FNM trained by gradient-based methods such as SGD and Adam. I will then present a new class of training algorithms that can theoretically achieve, and numerically observe, the asymptotic rate of the underlying discretization algorithms (while gradient-based methods cannot). Motivated by our theoretical analysis, I will finally report some competitive numerical results of CNN (and MgNet) using an activation function with compact support for image classification.
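As a concrete reference point for the methods named above, here is a minimal PINN sketch in PyTorch for a 1D Poisson problem, trained with Adam; this is the generic textbook setup, not the speaker's FNM or new training algorithms.

```python
import torch

# Minimal PINN sketch: solve -u''(x) = pi^2 sin(pi x) on (0,1) with
# u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi x). Trained with
# Adam, the gradient-based setting the abstract analyzes.
torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xb = torch.tensor([[0.0], [1.0]])                  # boundary points
for step in range(5000):
    x = torch.rand(128, 1, requires_grad=True)     # interior collocation
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = -d2u - torch.pi**2 * torch.sin(torch.pi * x)
    loss = (residual**2).mean() + (net(xb)**2).mean()  # PDE + BC penalty
    opt.zero_grad(); loss.backward(); opt.step()
```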
11:30am-12:00pm
Numerical integration of a given dynamical system can be viewed as a forward problem, with the learning of hidden dynamics from available observations as an inverse problem. The latter appears in various settings, such as model reduction of multiscale processes and, more recently, data-driven modeling via deep/machine learning. The iterative study of both forward and inverse problems forms the loop of informative and intelligent scientific computing. Some related issues, e.g., the identification of state variables and the selection of numerical methods, are discussed in this lecture. In particular, a question to be investigated is whether a good numerical integrator for discretizing prescribed dynamics is also good for discovering unknown dynamics in association with deep learning.
12:00-2:00pm
Lunch during the Conference will be on your own (not provided by organizers). For on-campus and other local dining options, please visit "Local Dining Options" below.
Please note that the afternoon session each day will start promptly in order to avoid delays in the afternoon schedule.
Afternoon Session Chair: Raymond Chan
2:00-2:30pm
Numerical simulation has become one of the major topics in computational science. To promote the modeling and simulation of complex problems, new strategies are needed that allow for the solution of large, complex model systems. Crucial issues for such strategies are reliability, efficiency, robustness, usability, and versatility. After discussing the needs of large-scale simulation, we point out basic simulation strategies such as adaptivity, parallelism, and multigrid solvers. To allow adaptive, parallel computations, the load-balancing problem for dynamically changing grids has to be solved efficiently by fast heuristics. These strategies are combined in the simulation system UG ("Unstructured Grids") presented here. In the second part of the seminar, we show the performance and efficiency of this strategy in various applications. In particular, the application and benefit of parallel adaptive multigrid methods in modeling drug permeation through human skin is shown in detail.
2:30-3:00pm
Time discretization is an important issue for time-dependent partial differential equations (PDEs). For $k$-th order PDEs ($k \ge 2$), the explicit time-marching method may suffer from a severe time step restriction $\tau = O(h^k)$ (where $\tau$ and $h$ are the time step size and spatial mesh size, respectively) for stability. Implicit and implicit-explicit (IMEX) time-marching methods can overcome this constraint.
However, for equations with nonlinear high-derivative terms, the IMEX methods are not good choices either, since a nonlinear algebraic system must be solved (e.g. by Newton iteration) at each time step. The explicit-implicit-null (EIN) time-marching method is designed to cope with the above-mentioned shortcomings. The basic idea of the EIN method is to add and subtract a sufficiently large linear highest-derivative term on one side of the considered equation, and then apply the IMEX time-marching method to the equivalent equation. The EIN method so designed does not need any nonlinear iterative solver, and the severe time step restriction for explicit methods can be removed. Coupled with the EIN time-marching method, we will discuss high-order finite difference and local discontinuous Galerkin schemes for solving high-order dissipative and dispersive equations. For simplified equations with constant coefficients, we perform analysis to guide the choice of the coefficient of the added and subtracted highest-order derivative terms in order to guarantee stability for large time steps. Numerical experiments show that the proposed schemes are stable and can achieve optimal orders of accuracy for both one-dimensional and two-dimensional linear and nonlinear equations.
This talk is based on joint work with Haijin Wang, Qiang Zhang and Shiping Wang, and with Meiqi Tan and Juan Cheng.
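To make the add-and-subtract idea concrete, consider the model nonlinear diffusion equation $u_t = (a(u)u_x)_x$ (an illustrative example, not necessarily one treated in the talk). The EIN splitting rewrites it as $u_t = A u_{xx} + [(a(u)u_x)_x - A u_{xx}]$ for a constant $A$, treating the linear term $A u_{xx}$ implicitly and the bracketed remainder explicitly in an IMEX scheme; the constant-coefficient stability analysis mentioned above guides the size of $A$ (roughly $A \ge \max_u a(u)$, as a hedged rule of thumb). Since the implicit part is linear with constant coefficients, each time step requires only a linear solve, with no Newton iteration.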
3:00-3:30pm
In this presentation, we discuss a few basic questions for PDE learning from observed solution data. Using various types of PDEs as examples, we show 1) how large the data space spanned by all snapshots along a solution trajectory is, 2) if one can construct an arbitrary solution by superposition of snapshots of a single solution, and 3) identifiability of a differential operator from a single solution data on local patches.
3:30-4:00pm
--
4:00-4:30pm
In recent years, deep learning methods have shown their superiority for solving high-dimensional PDEs where traditional methods fail. However, for low-dimensional problems, it remains unclear whether these methods have a real advantage over traditional algorithms as a direct solver. We discuss the random feature method (RFM) for solving PDEs, a natural bridge between traditional and machine learning-based algorithms. We demonstrate that the method exhibits spectral accuracy and can compete with traditional solvers in terms of both accuracy and efficiency. In addition, we find that the RFM is particularly suited for problems with complex geometry, where both traditional and machine learning-based algorithms encounter difficulties.
This is joint work with Jingrun Chen, Xurong Chi and Zhouwang Yang.
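A minimal numpy illustration of the random feature idea follows: a 1D Poisson problem is solved by least squares over fixed random tanh features, so only the linear coefficients are "trained". The feature distribution and boundary penalty are illustrative choices, not those of the RFM paper.

```python
import numpy as np

# Random feature sketch for -u''(x) = pi^2 sin(pi x) on [0, 1] with
# u(0) = u(1) = 0 (exact solution sin(pi x)). Scales and the boundary
# penalty weight below are illustrative assumptions.
rng = np.random.default_rng(0)
J = 400                                        # number of random features
w = rng.normal(0.0, 8.0, J)                    # fixed random weights
b = rng.uniform(-8.0, 8.0, J)                  # fixed random biases
x = np.linspace(0.0, 1.0, 500)[:, None]
t = np.tanh(x * w + b)                         # features phi_j(x)
phi_xx = (-2.0 * t * (1.0 - t**2)) * w**2      # exact second derivatives
f = np.pi**2 * np.sin(np.pi * x).ravel()

beta = 1e3                                     # boundary penalty weight
A = np.vstack([-phi_xx, beta * t[[0, -1], :]]) # PDE rows + boundary rows
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)    # linear least squares fit
u = t @ c
print(np.abs(u - np.sin(np.pi * x).ravel()).max())  # should be small
```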
4:30-5:00pm
The rapid advances in artificial intelligence in the last decade are primarily attributed to the wide application of deep learning (DL). Yet the high carbon footprint of large DL networks is a concern for sustainability. Green learning (GL) has been proposed as an alternative. GL is characterized by a low carbon footprint, small model sizes, low computational complexity, and mathematical transparency. It offers energy-effective solutions in cloud centers and on mobile/edge devices. GL has three main modules: 1) unsupervised representation learning, 2) supervised feature learning, and 3) decision learning. It has been successfully applied to several applications. GL has been inspired by DL, and the connection between GL and DL will be highlighted.
5:00-6:00pm
The standard poster size for the Conference is 24 inches x 36 inches, in either Portrait or Landscape format. A poster easel, foam board, and clips will be provided for display of each accepted presentation.
Tuesday, December 20
Morning Session Chair: Jinchao Xu
8:00-9:00am
COVID antigen tests (provided by the Conference) are required upon arrival in order to attend the Conference. Name badges will be provided to attendees upon presenting a negative test result to the Conference staff.
9:00-9:30am
We discuss our ongoing investigation of adaptive finite elements. We describe a posteriori error estimates that approximate the local interpolation error, derived from superconvergent derivative recovery. We next describe and compare several classes of $h$ and $hp$ refinement strategies. Finally, we apply these ideas to parallel adaptive strategies.
9:30-10:00am
There has been a surge of interest in recent years in general-purpose 'acceleration' methods that take a sequence of vectors converging to the limit of a fixed-point iteration and produce from it a faster-converging sequence. A prototype of these methods, one that has attracted much attention recently, is the Anderson Acceleration (AA) procedure. This talk will begin with a discussion of these general acceleration methods, focusing on Anderson acceleration and highlighting the link between AA and secant-type methods. This link will enable us to adapt to the nonlinear context a class of methods rooted in Krylov subspace techniques for solving linear systems.
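For readers unfamiliar with AA, a textbook implementation is sketched below; this follows the standard formulation from the literature rather than anything specific to the talk.

```python
import numpy as np

def anderson(g, x0, m=5, max_iter=100, tol=1e-10):
    # Textbook Anderson acceleration for the fixed-point iteration x = g(x).
    # Keeps a window of recent residuals f_k = g(x_k) - x_k and combines the
    # g-history to (approximately) minimize the residual norm -- the
    # secant/least-squares structure behind the Krylov connection.
    x = np.asarray(x0, dtype=float)
    G, F = [], []                          # histories of g(x_k) and residuals
    for _ in range(max_iter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return x
        G.append(gx); F.append(f)
        if len(F) > m + 1:                 # sliding memory window
            G.pop(0); F.pop(0)
        if len(F) == 1:
            x = gx                         # plain Picard step to start
        else:
            dG = np.column_stack([G[i+1] - G[i] for i in range(len(G) - 1)])
            dF = np.column_stack([F[i+1] - F[i] for i in range(len(F) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma            # Anderson-mixed update
    return x

# Example: accelerate the slowly converging iteration x = cos(x).
x_star = anderson(np.cos, np.array([1.0]))
```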
10:00-10:30am
We will give a gentle introduction to weak KAM theory and then reinterpret Freidlin-Wentzell's variational construction of the rate function in the large deviation principle for invariant measures from the weak KAM perspective. We will use a one-dimensional irreversible diffusion process on the torus to illustrate essential concepts in weak KAM theory such as the Peierls barrier and the projected Mather/Aubry/Mane sets. We provide alternative proofs of Freidlin-Wentzell's variational formulas, both for the self-consistent boundary data at each local attractor and for the rate function, which are formulated as a global adjustment of the boundary data and a local trimming from the lifted Peierls barriers. Based on this, we prove that the rate function is a weak KAM solution to the corresponding stationary Hamilton-Jacobi equation satisfying the selected boundary data on the projected Aubry set, and that it is also the maximal Lipschitz continuous viscosity solution. The rate function is the selected unique weak KAM solution and also serves as the global energy landscape of the original stochastic process. A probabilistic interpretation of the global energy landscape from the weak KAM perspective will also be discussed. This is a joint work with Yuan Gao of Purdue University.
10:30-11:00am
--
11:00-11:30am
Whether the 3D incompressible Euler equations can develop a finite time singularity from smooth initial data is one of the most challenging problems in nonlinear PDEs. In this talk, we will present an exciting new result with Dr. Jiajie Chen in which we prove finite time blowup of the 2D Boussinesq and 3D Euler equations with smooth initial data and boundary. There are several essential difficulties in establishing such blowup results. We overcome these difficulties by first constructing an approximate self-similar blowup profile using the dynamic rescaling formulation. To establish the stability of the approximate blowup profile, we decompose the linearized operator into a leading order operator plus a finite rank perturbation operator. We use sharp functional inequalities and optimal transport to establish the stability of the leading order operator. To estimate the finite rank operator, we use energy estimates and space-time numerical solutions with rigorous error control. This enables us to establish nonlinear stability of the approximate self-similar profile and prove stable nearly self-similar blowup of the 2D Boussinesq and 3D Euler equations with smooth initial data. This provides the first rigorous justification of the Hou-Luo blowup scenario.
11:30am-12:00pm
Multiscale time-dependent partial differential equations (PDEs) are challenging to compute by traditional mesh-based methods, especially when their solutions develop large gradients or concentrations at unknown locations. Particle methods, based on microscopic aspects of the PDEs, are mesh-free and self-adaptive, yet still expensive when a long-time or well-resolved computation is necessary.
We present DeepParticle, an integrated deep learning, optimal transport (OT), and interacting particle (IP) approach, to speed up the generation and prediction of PDE dynamics through two case studies on transport in fluid flows with chaotic streamlines:
1) large time front speeds of Fisher-Kolmogorov-Petrovsky-Piskunov equation (FKPP);
2) Keller-Segel (KS) chemotaxis system modeling bacteria evolution in the presence of a chemical attractant.
Analysis of FKPP reduces the problem to the computation of the principal eigenvalue of an advection-diffusion operator. A normalized Feynman-Kac representation makes possible a genetic IP algorithm that evolves the initial uniform particle distribution to a large-time invariant measure from which to extract front speeds. The invariant measure is parameterized by a physical parameter (the Peclet number). We train a lightweight deep neural network with local and global skip connections to learn this family of invariant measures. The training data come from IP computations in three dimensions at a few sample Peclet numbers, and the training objective being minimized is a discrete Wasserstein distance from OT theory. The trained network predicts a more concentrated invariant measure at a larger Peclet number and also serves as a warm start to accelerate IP computation.
The KS system is formulated as a McKean-Vlasov equation (the macroscopic limit) of a stochastic IP system, and the DeepParticle framework extends to this setting, learning to generate various finite-time bacterial aggregation patterns.
Joint work with Zhongjian Wang (University of Chicago) and Zhiwen Zhang (University of Hong Kong).
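As a rough illustration of the training objective, the sketch below minimizes a discrete Wasserstein-2 distance between generated and reference particle clouds via optimal assignment; the network, the reference data, and all names (w2_assignment_loss, gen_net, ref) are placeholders, not DeepParticle's actual architecture or transport solver.

```python
import torch
from scipy.optimize import linear_sum_assignment

def w2_assignment_loss(gen, ref):
    # Discrete Wasserstein-2 loss between equal-size particle clouds:
    # solve the optimal assignment on detached costs, then evaluate the
    # matched squared distances differentiably so gradients reach the
    # generator. A toy stand-in for DeepParticle's transport objective.
    C = torch.cdist(gen, ref) ** 2
    rows, cols = linear_sum_assignment(C.detach().cpu().numpy())
    return C[torch.as_tensor(rows), torch.as_tensor(cols)].mean()

# Illustrative training loop: push uniform noise onto a placeholder
# reference cloud (standing in for an IP-computed invariant measure).
gen_net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 3))
opt = torch.optim.Adam(gen_net.parameters(), lr=1e-3)
ref = 0.1 * torch.randn(256, 3) + 1.0
for step in range(200):
    z = torch.rand(256, 3)
    loss = w2_assignment_loss(gen_net(z), ref)
    opt.zero_grad(); loss.backward(); opt.step()
```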
12:00-2:00pm
Lunch during the Conference will be on your own (not provided by organizers). For on-campus and other local dining options, please visit "Local Dining Options" below.
Please note that the afternoon session each day will start promptly in order to avoid delays in the afternoon schedule.
Afternoon Session Chair: Tony Chan
2:00-2:40pm: Distinguished Lecture
Joint work with Howard Heaton and Samy Wu Fung.
First-order optimization algorithms are widely used today. Two standard building blocks in these algorithms are proximal operators (proximals) and gradients. Although gradients can be computed for a wide array of functions, explicit proximal formulas are typically available only for specific convex functions. This limits the use of proximals in applications where objectives are either unknown analytically (e.g. only available via black-box sampling) or nonconvex. We provide an explicit formula for accurately approximating such proximals. This is derived from a collection of relations between proximals, Moreau envelopes, Hamilton-Jacobi (HJ) equations, heat equations, and importance sampling. In particular, we provide a formula for a smooth approximation of the Moreau envelope and its gradient. The smoothness parameter can be adjusted to act as a denoiser. Our approach applies even when only (possibly noisy) black-box functions are available. We show effective use of this HJ proximal formula via several numerical examples.
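A Monte Carlo sketch of the kind of sampling formula the abstract describes is given below, under parameter names of my own (t, delta, n): Gaussian samples around x are averaged with softmax weights built from black-box evaluations of f. This is an assumption-laden illustration of the Moreau-envelope/heat-equation idea, not necessarily the authors' exact formula.

```python
import numpy as np

def hj_prox(f, x, t=0.5, delta=0.05, n=100000, rng=None):
    # Sketch of a Moreau-envelope/HJ-based proximal estimate: sample
    # y ~ N(x, t*delta*I) and softmax-weight by -f(y)/delta, so the
    # weighted mean approximates prox_{tf}(x) using only black-box
    # evaluations of f. Small delta sharpens the estimate but needs
    # more samples. Parameter choices here are illustrative.
    rng = rng or np.random.default_rng(0)
    y = x + np.sqrt(t * delta) * rng.standard_normal((n, x.size))
    fy = np.array([f(v) for v in y])
    w = np.exp(-(fy - fy.min()) / delta)      # shifted for stability
    return (w[:, None] * y).sum(axis=0) / w.sum()

# Sanity check against a known proximal: f = |.|_1, whose exact prox is
# soft-thresholding, shrink(x, t).
f = lambda v: np.abs(v).sum()
print(hj_prox(f, np.array([0.3, -1.2]), t=0.5))   # roughly [0.0, -0.7]
```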
2:40-3:20pm: Distinguished Lecture
The evolution of data handling systems has accelerated over the past two decades. It has become possible to ingest, cleanse, and process data at formerly stratospheric levels. There are emerging concepts of the so-called modern data stack, as well as debates between SQL-centric versus programmatic approaches. There are further debates about separate transactional and analytic systems versus emerging concepts like hybrid transaction/analytics processing. Data science approaches are also evolving, with a bias towards open source software. I will discuss my perspectives on this technology and its importance.
3:20-3:50pm
--
3:50-4:20pm
We present a simple, rigorous, and unified framework for solving and learning arbitrary nonlinear PDEs with Gaussian Processes (GPs). The proposed approach: (1) provides a natural generalization of collocation kernel methods to nonlinear PDEs and Inverse Problems; (2) has guaranteed convergence for a very general class of PDEs, and comes equipped with a path to compute error bounds for specific PDE approximations; (3) inherits the state-of-the-art computational complexity of linear solvers for dense kernel matrices; (4) generalizes to the completion of arbitrary Computational Graphs. We then discuss the importance of the choice of the kernel by introducing a GP approach for solving the Navier-Stokes equations with kernels informed about the underlying physics (e.g., the Richardson cascade and the Kolmogorov scaling laws). The first part of this talk is joint work with Yifan Chen, Bamdad Hosseini, and Andrew Stuart.
4:20-4:50pm
In this talk, I will highlight our work in a few key areas of electronic design automation (EDA), ranging from physical design and high-level synthesis for integrated circuit designs to layout synthesis for quantum computing. A common theme is efficient management of exponential growth of the design complexity.
4:50-5:40pm
Tony Chan (moderator), Jason Cong, Bjorn Engquist, Franca Hoffmann, Stan Osher, and Hongkai Zhao
6:30-9:30pm
Attendance at the Banquet is permitted for registered Banquet attendees only. Banquet logistics will be sent via email to all registered Banquet attendees during the week of December 12th.
Wednesday, December 21
Morning Session Chair: Bjorn Engquist
8:00-9:00am
COVID antigen tests (provided by the Conference) are required upon arrival in order to attend the Conference. Name badges will be provided to attendees upon presenting a negative test result to the Conference staff.
9:00-9:30am
In this talk, I will present some results from our recent studies of Wasserstein Hamiltonian Flow (WHF), which describes a mathematical principle: the density of a Hamiltonian flow in sample space is a Hamiltonian flow in the density manifold. In the first part, examples are used to illustrate the concept and properties of WHF. A control formulation is also used to show that WHF can be viewed as the Euler-Lagrange equation in density space. In addition, its connections to the geodesic equation on the Wasserstein manifold, the Schrodinger equation, and the Schrodinger bridge problem will be discussed. Various structure-preserving computational methods and their applications are briefly demonstrated in the second part of the presentation.
9:30-10:00am
In an extremal eigenvalue problem, one considers a family of eigenvalue problems, each with discrete spectra, and extremizes a chosen eigenvalue over the family. In this talk, we discuss eigenvalue problems defined on Riemannian manifolds and extremize over the metric structure. For example, we consider the problem of maximizing the principal Laplace-Beltrami eigenvalue over a family of closed surfaces of fixed volume. Computational approaches to such extremal geometric eigenvalue problems present new computational challenges and require novel numerical tools, such as the parameterization of conformal classes and the development of accurate and efficient methods to solve eigenvalue problems on domains with non-trivial genus and boundary. We highlight recent progress on computational approaches for extremal geometric eigenvalue problems, including (i) maximizing Laplace-Beltrami eigenvalues on closed surfaces and (ii) maximizing Steklov eigenvalues on surfaces with boundary.
10:00-10:30am
Bayesian sampling and neural networks are seemingly two different machine learning areas, but they both deal with many-particle systems. In sampling, one evolves a large number of samples (particles) to match a target distribution function, and in optimizing over-parameterized neural networks, one can view neurons as particles that feed each other information in the DNN flow. These perspectives allow us to employ mean-field theory, a powerful tool that translates the dynamics of a many-particle system into a partial differential equation (PDE), so that rich PDE analysis techniques can be used to understand both the convergence of sampling methods and the zero-loss property of over-parameterized ResNets. I would like to showcase the use of mean-field theory in these two machine learning areas, and I would also love to hear feedback from the audience on other possible applications.
10:30-11:00am
--
11:00-11:30am
The use of technology in financial institutions has increased tremendously in the past decades. Financial technology (Fintech) is a new trend that has revolutionized the financial industry. Many financial activities that have traditionally been done based on human skills have recently been replaced, or will be replaced, by computer systems. Fintech is a broad subject, and this talk will focus on quantitative analytics and efficient computational techniques. An important application is the pricing of financial derivatives (e.g. options). We will start from the classical Black-Scholes model, a partial differential equation that gives the no-arbitrage value of a so-called European option. We will present robust and efficient numerical PDE methods, such as multigrid, for solving various types of options. However, as the model becomes more complicated, especially for multi-asset option pricing, which leads to a high-dimensional problem, traditional PDE approaches become intractable. We will discuss how machine learning techniques can be used to solve financial problems. Our numerical results show that machine learning with carefully designed neural networks can be a powerful tool for solving complicated option pricing problems.
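To ground the PDE discussion, here is a minimal explicit finite-difference sketch for a European call under Black-Scholes; production pricers would use implicit schemes or the multigrid methods mentioned above, and all parameters are illustrative.

```python
import numpy as np

# Explicit finite-difference sketch for the Black-Scholes PDE
#   V_t + 0.5 sigma^2 S^2 V_SS + r S V_S - r V = 0
# for a European call, marching backwards from the terminal payoff.
r, sigma, K, T = 0.05, 0.2, 100.0, 1.0
S_max, M, N = 300.0, 300, 20000             # grid sizes; dt kept small
S = np.linspace(0.0, S_max, M + 1)          # enough for explicit stability
dS, dt = S_max / M, T / N
V = np.maximum(S - K, 0.0)                  # payoff at maturity
for n in range(N):
    tau = (n + 1) * dt                      # time to maturity
    V_SS = (V[2:] - 2 * V[1:-1] + V[:-2]) / dS**2
    V_S = (V[2:] - V[:-2]) / (2 * dS)
    V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * V_SS
                     + r * S[1:-1] * V_S - r * V[1:-1])
    V[0], V[-1] = 0.0, S_max - K * np.exp(-r * tau)   # boundary conditions
print(np.interp(100.0, S, V))               # ~10.45, the known call value
```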
11:30am-12:00pm
This talk will consider image processing problems ranging from texture segmentation, repeating signal pattern recognition, and image vectorization to identifying differential equations from noisy data. Texture and pattern recognition is a nontrivial task; we utilize a simple method to effectively identify and distinguish patterns for segmentation and pattern recognition. Extending IDENT and robust IDENT, we discuss using the weak form for differential equation identification. We consider both ODE and PDE models.
12:00-12:30pm
Numerical linear algebra software is being reinvented to allow tuning the accuracy of computation to the requirements of the application, resulting in savings of memory, time, and energy. Floating-point applications have a history of "oversolving" relative to expectations for many models. Real datatypes are so often defaulted to double precision in practice that GPUs did not gain wide acceptance until they provided, in hardware, operations not required in their original domain of graphics. However, many operations considered at a blockwise level allow for lower precision, and many blocks can be approximated with low-rank near-equivalents, leading to a smaller memory footprint. This implies higher residency in memory hierarchies, leading in turn to less time and energy spent on data copying, which may even dwarf the savings from fewer and cheaper flops. We provide examples from several application domains, including a 2022 Gordon Bell finalist computation that benefits from both blockwise lower precision and lower ranks.
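A small numpy sketch of the two compressions the abstract combines, blockwise low rank plus reduced precision, is given below; the tolerance rule and the float32 choice are illustrative assumptions, not any particular library's scheme.

```python
import numpy as np

def compress_block(A, rel_tol=1e-4):
    # Compress one dense block: truncated SVD plus reduced precision.
    # Keep only singular values above rel_tol * s_max and store the
    # factors in float32, shrinking the memory footprint; A ~= L @ R.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = max(1, int(np.sum(s > rel_tol * s[0])))       # numerical rank
    L = (U[:, :k] * s[:k]).astype(np.float32)
    R = Vt[:k].astype(np.float32)
    return L, R

# Example: a smooth kernel block between well-separated point sets is
# numerically low rank, so it compresses well.
x, y = np.linspace(0, 1, 256), np.linspace(3, 4, 256)
A = 1.0 / np.abs(x[:, None] - y[None, :])
L, R = compress_block(A)
print(L.shape[1], np.abs(L @ R - A).max() / np.abs(A).max())
```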
12:30-2:00pm
Lunch during the Conference will be on your own (not provided by organizers). For on-campus and other local dining options, please visit "Local Dining Options" below.
Please note that the afternoon session each day will start promptly in order to avoid delays in the afternoon schedule.
Afternoon Session Chair: Haomin Zhou
2:00-2:30pm
Computational Quasiconformal (CQC) Geometry studies the deformation patterns between shapes. It has found important applications in imaging science, such as image registration, image analysis, and image segmentation. With the advance of deep learning techniques, the incorporation of CQC theories into deep neural networks can further improve the performance of these imaging tasks in both efficiency and accuracy. In this talk, I will give an overview of how CQC and deep learning can play an important role in image processing for various applications in medical imaging, computer vision, and computer graphics.
2:30-3:00pm
In this talk, I will discuss our recent work on computational methods for inverse mean-field games (MFG). I will start from a low-dimensional setting using conventional discretization methods and discuss an algorithm for solving the corresponding inverse problem based on a bi-level optimization method. After that, I will extend the approach to high-dimensional problems by bridging the trajectory representation of MFG with a special type of deep generative model, normalizing flows.
3:00-3:30pm
We have entered the Data Age - the acquisition and sharing of vast quantities of data are now easier than ever. Knowledge elicited from both real-world and virtual-world data can be exploited to innovate new design strategies, learn human preferences, and devise new user interfaces. I will begin my talk with "Make It Home" - the seminal work on data-driven interior scene generation by automatic optimization, which has inspired much follow-up research. I will then highlight some of the inspired work from our group related to outfit synthesis, set dressing, creative 3D modeling, and mid-scale layout generation. After that, I will shift the focus to my recent work on using advanced generative models for novel content creation problems such as 2D and 3D style transfer, 360-image neural scene decoration, and 360-NeRF. I will conclude my talk by showing how content generation can facilitate my new research direction on marine computer vision.
Organizing Committee
Raymond Chan, City University of Hong Kong
Thomas Hou (Chair), Caltech
Stanley Osher, UCLA
Jinchao Xu, Penn State University/KAUST
Haomin Zhou, Georgia Tech
COVID Guidelines
- The Conference organizers will provide and require COVID antigen tests for all participants upon arrival on their first day at the Conference. Please allow at least 30 minutes for check-in upon arriving at the event; if you are arriving before the morning session begins on Monday, it is preferred that you allow 45 minutes for check-in.
- Caltech requires COVID vaccination for all employees and visitors, and by entering the Caltech campus, you attest to being fully vaccinated or having a medical exemption. More details can be found at https://together.caltech.edu/resources/events.
- Masks (surgical/N95/KN95) will be required inside our Auditorium and classrooms. We will have a supply of KN95 masks at the Check-In Table for attendees who request them.
Masking guidelines are subject to change. We will update this website with any policy changes.
Registration
Registration is required in order to attend the Conference and/or Conference Banquet. Registration is now sold out.
If you have questions about registration, please contact us.
If you have already registered and wish to edit your registration, please visit the online registration website or contact us.
Please note that, due to technical issues, remote/virtual attendance of the Conference via Zoom will no longer be possible. Attendance will be in-person only. We understand that this news is disappointing for attendees who had planned to attend remotely. We hope that you will be able to join us at Caltech in December.
Local Accommodations, Directions, and Parking
Hilton Pasadena
168 South Los Robles Avenue, Pasadena, CA 91101
(626) 577-1000
The Hilton Pasadena will be the location for the Conference Banquet on the evening of December 20th.
A block of discounted rooms has been reserved at the Hilton Pasadena for booking by Conference attendees. To take advantage of this discount, reserve directly with the hotel:
- Via phone at 626-577-1000 or 1-800-HILTONS. Please be sure to reference the Caltech Applied Mathematics Conference Dinner Event name and provide the code: CAM5
- Via the online discounted Conference booking link.
The Athenaeum
Caltech's on-campus faculty club offers a limited number of hotel rooms, which may be more expensive than other local options. Reservations at the Athenaeum must be arranged by the Conference organizers directly. If you are interested in reserving a room at the Athenaeum, please contact us.
The Saga Motor Hotel -- ~0.6 miles from Conference Venue at Caltech
Web Special: Starts at $89/night + 15% tax
1633 E Colorado Blvd., Pasadena, CA 91106
www.thesagamotorhotel.com
(626) 795-0431
Hyatt Place Pasadena -- ~1.4 miles from Venue
399 E Green St, Pasadena, CA 91101
www.hyatt.com
(626) 788-9108
Sheraton Pasadena -- ~1.5 miles from Venue
303 Cordova St, Pasadena, CA 91101
www.marriott.com
(626) 469-8100
Westin Pasadena -- ~1.9 miles from Venue
191 N Los Robles Ave, Pasadena, CA 91101
www.marriott.com
(626) 792-2727
Courtyard by Marriott Pasadena/Old Town -- ~2.7 miles from Venue
180 N Fair Oaks Ave, Pasadena, CA 91103
www.marriott.com
(626) 403-7600
The Conference talks will be held in Room 105 of the Annenberg Center for Information Science and Technology, building #16 on a campus map.
Please refer to the Center's location on Google maps for directions and navigation.
Visitors traveling from LAX (Los Angeles International Airport) and BUR (Hollywood Burbank Airport) to Caltech tend to choose a ride service like Uber or Lyft. Alternatively, you can use SuperShuttle (advance reservation recommended via supershuttle.com), rent a car at the airport, or pick up a taxi cab at a designated location at the airport.
The nearest Caltech parking structure to the Annenberg Center for Information Science and Technology is Structure #4, located at 370 South Holliston Avenue, Pasadena. Parking permits must be displayed in order to park in visitor (unmarked) parking spaces. Permits can be purchased at pay stations located in campus parking lots and structures.
More information about visitor parking can be found at: https://parking.caltech.edu/parking-info/visitor-parking
Lunch during the Conference will be on your own (not provided by the organizers). Please see below for dining suggestions both on- and off-campus:
On Campus
To view the different dining options on campus along with current hours, please see:
https://dining.caltech.edu/where-to-eat-
Off Campus
There are many great restaurants, bars, and lounges within walking distance of campus, including the Old Town Pasadena vicinity as well as the South Lake Avenue Shopping District.
- South Lake Avenue Shopping District:
- http://www.southlakeavenue.org/business-directory/food-dining/
- South Lake Avenue is within walking distance of campus (approximately 14 minutes), and has many dining options from fast food (Chipotle, Veggie Grill, Panda Express) to more formal full-service establishments.
- Ginger Corner Market: http://gingercornermarket.com/
- Ginger Corner Market is a small café within walking distance of campus (approximately 6 minutes).
- Old Town Pasadena: https://www.oldpasadena.org/visit/directory/dine/
- Old Town Pasadena is an approximate 9-minute drive from Caltech's campus.