Minnesota State University, Mankato

Seminars


Fall Department Seminars

Universal Turing Machine

Tuesday, September 26th, 4:00-5:00pm, WH 288A.

Refreshments will be provided in room WH 291 from 3:30 to 4:00pm.

Speaker: Dr. Kyung Il Lee, Department of Mathematics and Statistics, MSU

Abstract: Throughout the 19th century, the diverse fields of mathematics were becoming more and more abstract. Consequently, at the beginning of the 20th century, mathematics itself was challenged by the discovery of paradoxes such as Russell’s paradox. In this talk, we discuss what was identified as a challenge, namely Hilbert’s Program, and describe Turing’s solution to the Entscheidungsproblem (decision problem), which is destructive to one aspect of Hilbert’s Program. We then discuss the idea of the modern stored-program computer, which dates back to Turing’s notion of a universal machine, conceived while Turing was answering the decision problem by a diagonalization argument against the most convincing mathematical model of computation, namely the Turing machine model.
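
The diagonalization at the heart of the talk can be illustrated with a short, assumption-laden Python sketch; the framing in terms of programs rather than Turing machines, and the function names, are illustrative choices, not Turing's original formulation.

```python
# Sketch of the diagonal argument behind the undecidability of the halting
# problem.  'halts' stands for a hypothetical total decider; no correct
# implementation can exist, which is exactly what the construction shows.

def halts(prog, arg):
    """Hypothetically returns True iff prog(arg) would halt."""
    raise NotImplementedError("assume, for contradiction, that this exists")

def contrarian(prog):
    # Diagonal construction: do the opposite of whatever the decider
    # predicts about running 'prog' on its own source.
    if halts(prog, prog):
        while True:          # loop forever if the decider says "halts"
            pass
    return "halted"          # halt if the decider says "loops"

# Whatever answer halts(contrarian, contrarian) gives, contrarian does the
# opposite, so no total, correct 'halts' can exist.
```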

Computing Graph and Group Automorphisms with Mathematica

Thursday, October 5th, 4:00-5:00pm, WH 288A.

Refreshments will be provided in room WH 291 from 3:30 to 4:00pm.

Speaker: Prof. Dan Singer, Department of Mathematics and Statistics, MSU

Abstract: Mathematica is a powerful programming tool for implementing mathematical algorithms, manipulating large data sets, testing conjectures, and making abstract mathematical concepts concrete. I will demonstrate its use by computing all digraph automorphisms of a directed graph D. I will then find all group automorphisms of a given group G by finding all automorphisms of the associated Cayley digraph D = T(G; S) with respect to a generating set S. While no new mathematical ground is being broken here, my purpose is to demonstrate how to design and implement mathematical algorithms in software and to encourage mathematics students to consider learning Mathematica or the equivalent to do the same.
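
As a concrete illustration of the first computation, here is a brute-force Python sketch of finding all digraph automorphisms (the talk itself works in Mathematica; the 4-vertex example digraph is an illustrative assumption). For the second computation, one simple approach, under a suitable choice of generating set S, is to run the same search on the Cayley digraph and keep the maps that also preserve the group operation.

```python
# Brute-force search for all digraph automorphisms of a small directed graph.
from itertools import permutations

def digraph_automorphisms(vertices, edges):
    """All bijections p of the vertex set with (u, v) an edge iff (p[u], p[v]) is."""
    edge_set = set(edges)
    autos = []
    for image in permutations(vertices):
        p = dict(zip(vertices, image))
        if all(((p[u], p[v]) in edge_set) == ((u, v) in edge_set)
               for u in vertices for v in vertices):
            autos.append(p)
    return autos

# Directed 4-cycle 0 -> 1 -> 2 -> 3 -> 0: its automorphisms are the 4 rotations.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
for p in digraph_automorphisms(V, E):
    print(p)
```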

An Efficient Method for Band-limited Extrapolation by Regularization

Tuesday, October 24th, 4:00-5:00pm, WH 288A.

Refreshments will be provided in room WH 291 from 3:30 to 4:00pm.

Speaker: Dr. Weidong Chen, Department of Mathematics and Statistics, MSU

Abstract: In this presentation, I will discuss the problem I will attack: an efficient algorithm for band-limited extrapolation. The model is the following: a function f(t) in L^1(R) is band-limited if its Fourier transform F(w) is supported in a finite interval. Then f(t) can be recovered from F(w) by the inverse Fourier transform, integrating over that interval. The extrapolation problem is: given f(t) on [-T,T], find f(t) outside that interval, where T>0 is a constant.

A regularized spectral estimation formula and a regularized iterative algorithm for band-limited extrapolation are presented, with the ill-posedness of the problem taken into account. First the Fredholm equation is regularized. Then it is transformed into a differential equation in the case where the time interval is R. A fast algorithm to solve the differential equation is given by finite differences, and a regularized spectral estimation formula is obtained. A regularized iterative extrapolation algorithm is then introduced and compared with the Papoulis-Gerchberg algorithm.
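
For orientation, here is a discrete-time numpy sketch of the classical Papoulis-Gerchberg iteration that the regularized algorithm is compared against; the test signal, band limit, observation window, and iteration count are illustrative assumptions, not material from the talk.

```python
# Papoulis-Gerchberg iteration: alternate between band-limiting the current
# estimate and re-imposing the samples observed on the central window.
import numpy as np

N = 256                          # number of samples
t = np.arange(N)
band = 10                        # keep only the lowest `band` DFT frequencies (and mirrors)
x = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 7 * t / N)  # band-limited signal

observed = np.zeros(N, dtype=bool)
observed[96:160] = True          # samples known only on a central window (the "[-T, T]" data)

def band_limit(y):
    Y = np.fft.fft(y)
    mask = np.zeros(N, dtype=bool)
    mask[:band + 1] = True
    mask[-band:] = True          # keep the conjugate-symmetric low frequencies
    return np.real(np.fft.ifft(np.where(mask, Y, 0)))

estimate = np.where(observed, x, 0.0)      # initial guess: observed data, zeros elsewhere
for _ in range(500):
    smoothed = band_limit(estimate)
    # Re-impose the known samples; keep the band-limited values outside the window.
    estimate = np.where(observed, x, smoothed)

print("max error outside window:", np.max(np.abs(estimate - x)[~observed]))
```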

Solving a Quadratic Diophantine Equation

Thursday, November 2nd, 5:00-6:00pm, WH 285.

Speaker: Prof. Dan Singer, Department of Mathematics and Statistics, MSU

Abstract: Let a, b, and c be integers. We will demonstrate how to find all integer pairs (x,y) that satisfy the quadratic Diophantine equation ax^2+by=c. We will also provide criteria for deciding whether or not any solution to this equation can be found. We will introduce elementary concepts from number theory as needed: Euclid's algorithm, properties of prime numbers, the Chinese Remainder Theorem, the theorems of Fermat and Wilson, Euler's criterion for quadratic residues, and the Gauss Reciprocity Theorem.
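
The structure of the solution set can be previewed with a naive Python sketch: ax^2 + by = c has an integer solution exactly when a*x^2 ≡ c (mod b) is solvable, and then y = (c - a*x^2)/b. The sample coefficients below are illustrative assumptions; the number-theoretic criteria developed in the talk replace this brute-force residue search.

```python
# Naive solver for a*x^2 + b*y = c over the integers (b != 0):
# find the residues x0 mod b with a*x0^2 = c (mod b), then y is determined.

def solve_quadratic_diophantine(a, b, c, x_range=range(-20, 21)):
    """Return the (x, y) integer solutions of a*x^2 + b*y = c with x in x_range."""
    residues = [r for r in range(abs(b)) if (a * r * r - c) % b == 0]
    solutions = []
    for x in x_range:
        if x % abs(b) in residues:            # equivalent to a*x^2 = c (mod b)
            solutions.append((x, (c - a * x * x) // b))
    return solutions

# Example: x^2 + 7y = 4  ->  x = +/-2 (mod 7)
for x, y in solve_quadratic_diophantine(1, 7, 4, range(-10, 11)):
    assert 1 * x * x + 7 * y == 4
    print(x, y)
```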

High Fidelity Simulations of Flow Over a Biofilm Based on the Cahn-Hilliard and Navier-Stokes Equations

Tuesday, November 21st, 4:00-5:00pm, WH 288A.

Refreshments will be provided in room WH 291 from 3:30 to 4:00pm.

Speaker: Dr. Nathan McClanahan, South Dakota State University (presenter); joint work with Nicholas Stegmeier, Jeffrey Doom, and Jung-Han Kimn

Abstract: Biofilms are attached microbial communities made of many different components. Biofilms are found throughout nature as well as in industrial and medical settings. Understanding how biofilms spread is important in the prevention and treatment of diseases and contamination. To model a biofilm we used an energy-based model, starting with the Cahn-Hilliard equation together with the Flory-Huggins equation. We will give a brief description of the background of these equations. We will discuss the implementation of efficient parallel simulation procedures based on parallel numerical algorithms and toolkits, including PETSc (Portable, Extensible Toolkit for Scientific Computation), which is developed at Argonne National Laboratory. We will also discuss the ongoing collaborative work to combine the Cahn-Hilliard and Navier-Stokes equations into a single system. This system will use the Navier-Stokes equations to handle the flow outside of the biofilm and the Cahn-Hilliard equation to model the interface between the fluid and the biofilm. Results consistent with observations in nature will be discussed, as well as future work and applications of the combined model.
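
To give a flavor of the Cahn-Hilliard part of the model, here is a minimal 1-D numpy sketch with an explicit finite-difference step. It uses the common double-well potential rather than the Flory-Huggins free energy of the talk, and the grid size, mobility, and interface coefficient are illustrative assumptions; the actual simulations are parallel, PETSc-based, and coupled to Navier-Stokes.

```python
# 1-D Cahn-Hilliard: dc/dt = M * lap(mu), mu = f'(c) - kappa * lap(c),
# with the double-well f(c) = (c^2 - 1)^2 / 4, so f'(c) = c^3 - c.
import numpy as np

N, dx, dt = 100, 1.0, 0.01       # grid points, spacing, time step (explicit, so dt must be small)
M, kappa = 1.0, 1.0              # mobility and gradient-energy coefficient

rng = np.random.default_rng(0)
c = 0.1 * rng.standard_normal(N) # small random perturbation about the mixed state c = 0

def lap(u):
    """Second-difference Laplacian with periodic boundaries."""
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(20000):
    mu = c**3 - c - kappa * lap(c)        # chemical potential
    c += dt * M * lap(mu)                 # conservative update

print("phase fractions:", np.mean(c > 0), np.mean(c < 0))   # separated phases near c = +/-1
```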

Spring Department Seminars

Mathematical Methods and Modeling in the National Security Sciences

Monday, March 27th, 4:00-5:00pm, WH 284A.

Refreshments will be provided in room WH 291 from 3:30 to 4:00pm.

Speaker: Dr. Aaron Luttman, Manager, Diagnostic Research and Material Studies, Nevada National Security Site

Abstract: While most people are familiar with many of the military aspects of national security, the scientific enterprise in support of national security is less well known. The National Nuclear Security Administration (NNSA) is a semi-autonomous agency within the U.S. Department of Energy that oversees the nation’s nuclear security science, from nuclear non- and counter-proliferation technologies to nuclear emergency response (as in the Fukushima disaster in Japan) to the science of maintaining the U.S. nuclear weapons stockpile. The NNSA supports a scientific enterprise of more than 50,000 scientists, technicians, and engineers. In this presentation, we will introduce some of the latest scientific developments underway in support of U.S. nuclear security, including current mathematical research associated with the chemistry and physics of dynamic material studies, which involves explosively driven experimentation in materials science. In addition to mathematical case studies at the cutting edge of nuclear security science, we will discuss some of the national policies that drive the science, as well as how new graduates in science, technology, engineering, and mathematics can get involved in this research through internships and support for graduate studies.

Model Average Versus Model Selection: A Bayes Perspective

Friday, April 7th, 11:00-11:50am, AH 310.

Speaker: Dr. Tri Le (presenter) and Bertrand Clarke, University of Nebraska-Lincoln

Abstract: We compare the performance of five model average predictors -- stacking, Bayes model averaging, bagging, random forests, and boosting -- to the components used to form them. In all five cases we provide conditions under which the model average predictor performs as well or better than any of its components. This is well known empirically, especially for complex problems, although few theoretical results seem to be available. Moreover, all five of the model averages can be regarded as Bayesian. Stacking is the Bayes optimal action in an asymptotic sense under several loss functions. Bayes model averaging is known to be the Bayes action under squared error. We show that bagging can be regarded as a special case of Bayes model averaging in an asymptotic sense. Random forests are a special case of bagging and hence likewise Bayes. Boosted regression is a limit of Bayes optimal boosting classifiers. We have limited our attention to the regression context since that is where model averaging techniques differ most often from current practice.
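
As a toy illustration of the stacking idea, the following numpy sketch combines two simple regression components with least-squares weights chosen on a held-out block and compares held-out error; the simulated data, the two components, and the single train/hold split are illustrative assumptions, not the paper's setting.

```python
# Stacking toy example: weight two component predictors by least squares on
# held-out data and compare mean squared error with the components themselves.
import numpy as np

rng = np.random.default_rng(1)
n = 400
x = rng.uniform(-3, 3, n)
y = np.sin(x) + 0.3 * rng.standard_normal(n)          # true regression is nonlinear

train, hold = slice(0, 200), slice(200, 400)

# Component 1: linear fit.  Component 2: cubic polynomial fit.
lin = np.polyfit(x[train], y[train], 1)
cub = np.polyfit(x[train], y[train], 3)
preds = np.column_stack([np.polyval(lin, x[hold]), np.polyval(cub, x[hold])])

# Stacking weights: least-squares combination of the component predictions
# on the held-out block (a simple stand-in for the Bayes-optimal weights).
w, *_ = np.linalg.lstsq(preds, y[hold], rcond=None)
stacked = preds @ w

for name, p in [("linear", preds[:, 0]), ("cubic", preds[:, 1]), ("stacked", stacked)]:
    print(name, "held-out MSE:", np.mean((y[hold] - p) ** 2))
```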

Classification of Protein Binding Ligands Using Their Structural Dispersion

Monday, April 10th, 9:00-9:50am, Nelson Hall 003.

Speaker: Dr. Galkande Premarathna, Texas Tech University

Abstract: It is known that a protein’s biological function is in some way related to its physical structure. Many researchers have studied this relationship, both for the entire backbone structures of proteins and for their binding sites, which are where binding activity occurs. Despite this research, it remains an open challenge to predict a protein’s function from its structure. The main purpose of this research is to gain a better understanding of how structure relates to binding activity and to classify proteins according to function via structural information. We approach the problem using the dataset compiled by Kahraman et al. (2007) and an extended Kahraman dataset. We calculate the covariance matrix of each binding site’s coordinates, using the distance of each atom to the center of mass and the distances from each atom to the 1st, 2nd, and 3rd principal axes. We then perform classification on these matrices using a variety of techniques, including nearest neighbor. Finally, we compare the performance of this model-based technique with alignment-based techniques.
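
A schematic numpy sketch of this pipeline follows: summarize each binding site by the covariance matrix of its atom coordinates and classify new sites by nearest neighbor on those matrices. The random "sites", the two-class setup, and the Frobenius-distance choice are illustrative assumptions, not the talk's data or metric.

```python
# Covariance-descriptor classification: each site becomes a 3x3 covariance
# matrix of centered atom coordinates; classify by 1-nearest neighbor.
import numpy as np

rng = np.random.default_rng(2)

def site_covariance(coords):
    """3x3 covariance matrix of an (n_atoms, 3) coordinate array, centered at the centroid."""
    centered = coords - coords.mean(axis=0)
    return centered.T @ centered / len(coords)

def simulate_site(spread):
    n_atoms = rng.integers(20, 40)
    return rng.standard_normal((n_atoms, 3)) * spread   # anisotropic spread encodes the class

# Two ligand classes with different structural dispersion along the principal axes.
train = [(site_covariance(simulate_site([1.0, 1.0, 1.0])), "A") for _ in range(30)] + \
        [(site_covariance(simulate_site([3.0, 1.0, 0.5])), "B") for _ in range(30)]

def classify(coords):
    c = site_covariance(coords)
    # 1-nearest neighbor under the Frobenius norm between covariance matrices.
    return min(train, key=lambda pair: np.linalg.norm(pair[0] - c))[1]

test = simulate_site([3.0, 1.0, 0.5])
print("predicted class:", classify(test))    # expected: "B"
```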

Multiple Imputation for Missing Data and Disclosure Limitation

Tuesday, April 11th, 11:00-11:50am, WH 284.

Speaker: Dr. Christine Kohnen, Duke University

Abstract: Multiple Imputation (Rubin, 1987) and its underlying principles can be used in applications ranging from missing data to disclosure limitation. Regardless of the scenario, valid inferences can be obtained using either the standard combining rules and variance estimates of multiple imputation or variants of the rules derived specifically for disclosure limitation. The focus of this seminar will be on two applications of multiple imputation. The first uses multiple imputation to help determine an overall household income distribution of a set of students, where roughly 30% of the data are unknown. The second application is based on multiple imputation for disclosure limitation, such that sensitive and non-sensitive data can be released through the creation of partially synthetic data. The seminar will conclude with a discussion on how these two methods can be combined for use in other applications.
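
The "standard combining rules and variance estimates" referred to above are Rubin's rules for pooling an estimate across the completed datasets; a small numpy sketch follows, with made-up numbers for illustration.

```python
# Rubin's (1987) combining rules: pool a point estimate and its variance
# across m analyses of multiply imputed datasets.
import numpy as np

def pool(estimates, variances):
    """Combine per-imputation estimates and variances into one estimate and total variance."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    q_bar = q.mean()                       # pooled point estimate
    u_bar = u.mean()                       # average within-imputation variance
    b = q.var(ddof=1)                      # between-imputation variance
    total = u_bar + (1 + 1 / m) * b        # total variance of the pooled estimate
    return q_bar, total

# Example: a mean household income estimated from m = 5 completed datasets.
est, var = pool([52.1, 50.8, 53.0, 51.5, 52.4], [4.0, 3.8, 4.2, 3.9, 4.1])
print(f"pooled estimate {est:.2f} with standard error {var ** 0.5:.2f}")
```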

Mathematica Demonstration

Thursday, April 13th, 11:00-11:50am, including Q & A, WH 284.

Speaker: Matt Woodbury, Wolfram Research (Education and Research)

Abstract: I will begin with a technical overview of Mathematica and briefly touch on the creation of Wolfram|Alpha. Next, we will discuss emerging trends in technology and what is currently available (or being developed) to support those trends. Then, to give you a sense of what's possible, I'll discuss how other organizations use these tools for teaching and research.

Parametric Analysis of “Hemophilia A” and Analytical Modeling of Democracy

Friday, April 14th, 11:00-11:50am, AH 310.

Speaker: A K M Raquibul Bashar, University of South Florida

Abstract: This study presents a parametric analysis of the rare disease ‘Hemophilia A’ using data from the Centers for Disease Control and Prevention (CDC). The parametric analysis enables us to answer some basic critical questions, such as: What is the distribution of patients’ severity levels? Is there any dependency between and among severity levels, inhibitor history, and race? These questions are answered using classical parametric analyses of categorical variables.
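
A small sketch of the kind of classical categorical analysis referred to here is a chi-square test of independence, for example between severity level and inhibitor history; the contingency table below is made up for illustration, not CDC data.

```python
# Chi-square test of independence between two categorical variables.
import numpy as np
from scipy.stats import chi2_contingency

#                 inhibitor: no   yes
table = np.array([[120,  15],     # mild
                  [ 90,  25],     # moderate
                  [ 60,  40]])    # severe

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p-value = {p_value:.4f}")
# A small p-value suggests severity level and inhibitor history are not independent.
```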

This presentation will also show a statistical model constructed using data from the Economist Intelligence Unit (EIU) collected from 167 countries around the world. The model predicts the democracy score, which can be used to rank countries around the globe and to assign each to one of the classifications defined by the EIU: ‘full democracy’, ‘flawed democracy’, ‘hybrid democracy’, and ‘authoritarian regime’. The EIU performs a descriptive analysis to classify the countries; we developed a statistical model using the EIU data to estimate the democracy score and then classify the countries. The proposed statistical model is of high quality, which is reflected in the accuracy of our classification.
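
A schematic sketch of the modeling step described above: fit a linear model that predicts an overall democracy score from category sub-scores and then bin the prediction into the EIU-style classes. The simulated sub-scores, the linear-model choice, and the cutoffs (8, 6, 4) are illustrative assumptions, not the speaker's model or the EIU data.

```python
# Predict an overall score from sub-scores by least squares, then classify.
import numpy as np

rng = np.random.default_rng(3)
n_countries = 167
X = rng.uniform(0, 10, size=(n_countries, 5))                  # five category sub-scores per country
true_w = np.array([0.3, 0.2, 0.2, 0.15, 0.15])
score = X @ true_w + 0.2 * rng.standard_normal(n_countries)    # observed overall score

# Least-squares fit of the overall score on the sub-scores.
w_hat, *_ = np.linalg.lstsq(X, score, rcond=None)
predicted = X @ w_hat

def classify(s):
    if s > 8:
        return "full democracy"
    if s > 6:
        return "flawed democracy"
    if s > 4:
        return "hybrid democracy"
    return "authoritarian regime"

labels = [classify(s) for s in predicted]
print({c: labels.count(c) for c in set(labels)})
```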