- Econometrics
- Time Series Analysis
Degree
- Ph.D. in Economics (University of North Carolina at Chapel Hill)

Research Interests
My field of specialization is econometrics. To discover laws in the real-world economy, we need to perform plausible data analysis and draw economic implications from the empirical results. Econometrics helps us accomplish these goals by developing statistically rigorous and economically insightful methods. In that sense, econometricians can be regarded as engineers who supply applied economists with useful tools for empirical analysis.
Economic data can broadly be classified into time series data and cross-section data, each of which requires different econometric methods. Time series data cover multiple time periods for a single individual; a simple example is the monthly Japanese unemployment rate from January 2000 through December 2020. Cross-section data cover multiple individuals at a single time period; a simple example is country-level unemployment rates in December 2020. The primary target of my research is time series data, but I have recently been extending my scope to cross-section data as well. Three of my ongoing projects are introduced below.
1. Mixed Data Sampling (MIDAS) econometrics
Classical time series analysis requires all target variables to be sampled at the same frequency. Consider, for example, analyzing the dynamic interaction between unemployment and gross domestic product (GDP). In many countries, unemployment statistics are announced monthly while GDP statistics are announced quarterly. The classical approach forces us to aggregate the monthly unemployment data to the quarterly level before formulating a bivariate model for unemployment and GDP. Such temporal aggregation causes a loss of information and consequently lowers the precision of statistical inference.
A new strand of research that emerged around 2004 attempts to exploit all available data regardless of their sampling frequencies. In the recent literature, this approach is broadly known as “Mixed Data Sampling (MIDAS)” or “mixed frequency” econometrics. Many researchers report that the MIDAS approach yields higher prediction accuracy for economic indicators than the earlier single-frequency approach.
I have been studying the theory and applications of MIDAS since I was a Ph.D. student at the Department of Economics, University of North Carolina at Chapel Hill (2009-2014). First, I defined Granger causality (i.e., incremental predictive ability) with mixed frequency data and proposed a test for mixed frequency Granger causality. Second, I tested Granger causality from the weekly interest rate spread to quarterly economic growth in the U.S. Third, I investigated the source of the sluggish private investment in Japan’s Lost Decade using the MIDAS approach.
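To give a concrete flavor of the mixed frequency approach, the following Python sketch regresses a quarterly variable on the three monthly observations of the previous quarter without temporal aggregation, and then checks mixed frequency Granger non-causality with a textbook F-test. It is only a schematic illustration under a hypothetical data-generating process; the tests developed in my papers rely on more refined asymptotics and bootstrap methods.

```python
# Schematic unrestricted mixed frequency ("U-MIDAS") regression: quarterly GDP growth
# is regressed on its own lag and on the three monthly unemployment observations of
# the previous quarter, with no temporal aggregation. The data-generating process and
# all variable names are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_q = 200                                    # number of quarters
u = rng.normal(size=(n_q, 3))                # row t = the three monthly unemployment obs. of quarter t
gdp = 0.3 * np.roll(u[:, 2], 1) + rng.normal(size=n_q)   # GDP reacts to the last month of the prior quarter
gdp[0] = rng.normal()

# Unrestricted model: constant, GDP(t-1), and the three monthly observations of quarter t-1.
y = gdp[1:]
X = np.column_stack([np.ones(n_q - 1), gdp[:-1], u[:-1, 0], u[:-1, 1], u[:-1, 2]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ssr_u = np.sum((y - X @ beta) ** 2)

# Restricted model: drop the monthly regressors (mixed frequency Granger non-causality).
Xr = X[:, :2]
br, *_ = np.linalg.lstsq(Xr, y, rcond=None)
ssr_r = np.sum((y - Xr @ br) ** 2)

q, dof = 3, len(y) - X.shape[1]
F = ((ssr_r - ssr_u) / q) / (ssr_u / dof)
print(f"F = {F:.2f}, p-value = {stats.f.sf(F, q, dof):.4f}")   # small p-value: causality detected
```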

2. White noise tests and financial applications
A time series is called white noise if its future values are uncorrelated with its past and present values. Uncorrelatedness is essentially equivalent to unpredictability, so it is of practical interest to test whether an economic time series is white noise. Testing the white noise hypothesis might sound like an elementary problem, but it is actually a hard one. A major challenge is that serial uncorrelatedness is a much weaker condition than serial independence, so it is non-trivial to establish formal asymptotic theory under the null hypothesis of white noise.
After several years of research beginning in 2015, I proposed a new white noise test based on the largest sample autocorrelation across lags. I found via mathematical derivations and numerical experiments that the proposed test achieves higher statistical accuracy than existing white noise tests. As an empirical application, I tested the white noise hypothesis for stock returns (i.e., the weak-form efficiency of stock markets). My empirical results suggest that daily returns of several major stock price indices are serially correlated (i.e., partially predictable) during crisis periods.
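The structure of a max-correlation statistic can be illustrated with a short Python sketch. The published test handles weakly dependent (non-i.i.d.) data with a dependent wild bootstrap; the sketch below replaces that step with a naive Monte Carlo null based on i.i.d. Gaussian draws, purely for illustration, and all names are hypothetical.

```python
# Illustrative max-correlation white noise test: the statistic is sqrt(n) times the
# largest absolute sample autocorrelation over lags 1..max_lag. The null distribution
# is approximated here by simulating i.i.d. Gaussian series; the published test instead
# uses a bootstrap that is robust to weak dependence.
import numpy as np

def max_corr_stat(x, max_lag):
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    denom = xc @ xc
    rho = np.array([(xc[k:] @ xc[:-k]) / denom for k in range(1, max_lag + 1)])
    return np.sqrt(len(x)) * np.max(np.abs(rho))

def white_noise_pvalue(x, max_lag=10, n_sim=2000, seed=0):
    rng = np.random.default_rng(seed)
    stat = max_corr_stat(x, max_lag)
    null = [max_corr_stat(rng.standard_normal(len(x)), max_lag) for _ in range(n_sim)]
    return np.mean(np.array(null) >= stat)

rng = np.random.default_rng(1)
e = rng.standard_normal(501)
print(white_noise_pvalue(e[1:] + 0.5 * e[:-1]))      # MA(1): serially correlated, small p-value
print(white_noise_pvalue(rng.standard_normal(500)))  # i.i.d. noise: large p-value
```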

3. Unifying copulas, missing data analysis, and causal inference
Recently, I have unified copulas, missing data analysis, and causal inference in a cross-section set-up. Copulas are statistical distributions that capture complex interdependence among multiple variables with a relatively small number of parameters. Regression analysis exploiting copulas is called copula-based regression, and it is increasingly popular as an approach that achieves both flexible and parsimonious specifications.
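The following Python sketch shows how a copula with given marginals implies a (generally nonlinear) regression curve. The Gaussian copula, the marginal distributions, and the parameter value below are hypothetical illustrative choices, not the specifications used in my work.

```python
# Regression curve implied by a Gaussian copula: with normal scores
# z_x = Phi^{-1}(F_X(x)), the score of Y given X = x is N(rho * z_x, 1 - rho^2),
# so E[Y | X = x] can be computed by one-dimensional Gauss-Hermite quadrature.
# The copula, marginals, and rho are hypothetical illustrative choices.
import numpy as np
from scipy import stats

rho = 0.6
FX = stats.expon(scale=2.0)    # hypothetical marginal of the regressor
FY = stats.lognorm(s=0.8)      # hypothetical marginal of the regressand

nodes, weights = np.polynomial.hermite_e.hermegauss(20)   # integrates against exp(-w^2/2)
weights = weights / np.sqrt(2.0 * np.pi)                   # convert to standard normal weights

def copula_regression(x):
    """E[Y | X = x] implied by the Gaussian copula with correlation rho."""
    zx = stats.norm.ppf(FX.cdf(x))
    zy = rho * zx + np.sqrt(1.0 - rho ** 2) * nodes   # conditional normal scores of Y
    return np.sum(weights * FY.ppf(stats.norm.cdf(zy)))

for x in (0.5, 2.0, 5.0):
    print(x, copula_regression(x))    # the implied regression curve is nonlinear in general
```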
The existing literature on copula-based regression is built upon a complete-data framework, in which the regressand and regressors are observed for all individuals. In the real world, however, missing data can arise for various reasons. Taking corporate finance data as an example, many firms disclose their sales but not their research and development (R&D) expenses. When there are missing observations in the regressand or regressors, a direct application of the existing copula-based regression can result in a biased regression curve.
After several years of research beginning in 2017, I established a valid methodology for copula-based regression in the presence of missing data. A core insight of my approach is to unify missing data analysis and causal inference. Causal inference, also called program evaluation, attempts to estimate the causal effect of a certain program (e.g., launching electronic voting or imposing a consumption tax); it is one of the most active areas of research in modern econometrics. An outcome of a program is observed if and only if the program is implemented, so there is an analogy between whether data are observed and whether a program outcome is observed. Taking advantage of this analogy, I established a proper way to handle missing data in copula-based regression.
A well-known technique in causal inference is to assign appropriate weights to the group for which the program is implemented (i.e., the treatment group) and to the group for which it is not (i.e., the control group). I adopted a similar approach to deal with missing data: the copula-based regression is fitted to weighted observations, where the weights are designed to balance the group of observed individuals and the group of individuals with missing data. Under certain regularity conditions, the proposed approach is guaranteed to produce an unbiased regression curve.
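The weighting step can be illustrated with a short Python sketch. In the sketch, the probability that a firm discloses its R&D expenses is modeled as a function of sales, and each disclosing firm is weighted by the inverse of its estimated disclosure probability; in my method these weights enter the copula-based regression, whereas the sketch only shows that the weighting removes the selection bias in a simple average. The variable names and the simulated data are hypothetical.

```python
# Schematic illustration of the weighting idea: firms with larger sales are more likely
# to disclose R&D expenses, so a naive average over disclosing firms is biased.
# Weighting each disclosing firm by 1 / P(disclose | sales) restores balance between
# observed and missing firms. All names and the simulated data are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
sales = rng.lognormal(mean=1.0, sigma=0.7, size=n)
rnd = 0.5 + 0.3 * sales + rng.normal(scale=0.5, size=n)         # R&D expenses (fully simulated)

# Disclosure probability increases with sales (missing at random given sales).
p_true = 1.0 / (1.0 + np.exp(-(-1.0 + 1.5 * np.log(sales))))
observed = rng.random(n) < p_true

# Step 1: estimate P(disclose | sales) with a logistic regression.
Z = sm.add_constant(np.log(sales))
p_hat = sm.Logit(observed.astype(float), Z).fit(disp=0).predict(Z)

# Step 2: weight each disclosing firm by the inverse of its estimated disclosure probability.
w = 1.0 / p_hat[observed]

print("true mean R&D        :", rnd.mean())
print("naive mean (observed):", rnd[observed].mean())                    # biased upward
print("weighted mean        :", np.average(rnd[observed], weights=w))    # close to the true mean
```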
As an empirical application, I fitted the copula-based regression to German manufacturing firms, with R&D expenses as the regressand and sales as the regressor. In my dataset, there is a clear tendency for firms with larger sales to be more likely to disclose their R&D expenses. Taking advantage of this tendency, I assigned appropriate weights to each firm and then performed the copula-based regression. The proposed approach and the existing approach, which assigns equal weights to all firms (i.e., essentially ignores the missing data), produce strikingly different regression curves. This contrast suggests that the proposed approach dominates the existing approach in the sense that the missing data problem is properly addressed.

Lectures and Seminars
Teaching Experience
Econometrics (Undergraduate)
Analysis of Stationary Time Series (Graduate)
Analysis of Nonstationary Time Series (Graduate)
“Econometrics”
“Econometrics” is an undergraduate-level course on econometrics (taught in Japanese). To discover laws in the real-world economy, we need to perform plausible data analysis and draw economic implications from the empirical results. Econometrics helps us accomplish these goals by developing statistically rigorous and economically insightful methods. In this course, I teach the theory and practice of econometrics in a clear-cut way.
Topics of this course include linear regression, ordinary least squares, the coefficient of determination, the Gauss-Markov theorem, the t-test, and the F-test. These are all standard topics, but I will present some original, interesting empirical applications using real estate and corporate finance data.
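As a taste of the practice side, here is a minimal Python sketch of the core toolkit covered in the course, applied to simulated data; the variables (floor area and rent) are hypothetical and are not taken from the empirical applications used in class.

```python
# Ordinary least squares, the coefficient of determination, and t-/F-tests on
# simulated (hypothetical) data with a single regressor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
floor_area = rng.uniform(20, 120, size=200)                       # hypothetical regressor
rent = 30 + 1.8 * floor_area + rng.normal(scale=15, size=200)     # hypothetical regressand

res = sm.OLS(rent, sm.add_constant(floor_area)).fit()
print(res.params)      # OLS estimates of the intercept and slope
print(res.rsquared)    # coefficient of determination
print(res.tvalues)     # t-statistics
print(res.f_pvalue)    # p-value of the overall F-test
```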

“Analysis of Stationary Time Series” & “Analysis of Nonstationary Time Series”
“Analysis of Stationary Time Series” and “Analysis of Nonstationary Time Series” are graduate-level courses on the theory and methods of time series econometrics (taught in English). A primary goal of these courses is that students will be able to perform a sensible analysis of time series data in their own theses. “Analysis of Stationary Time Series” covers the stationary time series literature, including autoregressive moving average (ARMA), generalized autoregressive conditional heteroscedasticity (GARCH), and vector autoregression (VAR) models. “Analysis of Nonstationary Time Series” covers the nonstationary time series literature, including unit roots, spurious regression, cointegration, and the vector error correction model (VECM). While the main focus will be on theoretical and numerical aspects, I will also present empirical illustrations as much as possible in order to keep a reasonable balance between theory and practice.
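As a small illustration of the stationary toolkit, the following Python sketch simulates an AR(1) process and fits it with statsmodels; the parameter values are hypothetical.

```python
# Simulate a stationary AR(1) process and estimate it by maximum likelihood.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
n, phi = 500, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

res = ARIMA(y, order=(1, 0, 0)).fit()   # AR(1) with a constant
print(res.params)                       # estimated constant, AR coefficient, innovation variance
```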
A time series is called stationary if it fluctuates around a fixed mean and its deviations from that mean are only temporary; it is called nonstationary if it has a time-dependent mean or deviates permanently from a fixed mean. Many economic time series are nonstationary in levels and stationary in first differences. Hence, graduate students who major in econometrics are recommended to take both “Analysis of Stationary Time Series” and “Analysis of Nonstationary Time Series” so that they can handle a variety of time series data.
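The level-versus-difference point can be illustrated with a short Python sketch: a simulated random walk is nonstationary in levels but stationary in first differences, which an augmented Dickey-Fuller test makes visible. The simulated series is hypothetical.

```python
# A random walk is nonstationary in levels and stationary in first differences.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
level = np.cumsum(rng.standard_normal(500))   # random walk (unit root)
diff = np.diff(level)                         # first difference (white noise)

for name, series in [("level", level), ("first difference", diff)]:
    stat, pval, *_ = adfuller(series)
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pval:.3f}")
```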

Seminar for undergraduate students
I became an academic adviser of undergraduate students in April 2020. My seminar course covers the theory, methodology, and practice of econometrics so that students become able to perform statistically meaningful analysis of whatever economic data they are interested in. Juniors are assigned undergraduate-level textbooks or recent articles and reports on the real-world economy; seniors write and present their own theses. The substance of this seminar course may change depending on the academic interests of my students.
Seminar for graduate students
I became an academic adviser of graduate students in April 2020. In my seminar course, we write scholarly papers on econometrics, especially time series analysis. The papers are intended to form part of the students’ master’s or Ph.D. theses and to be submitted to peer-reviewed English-language journals. In the papers, students are expected to propose new econometric methods that outperform existing methods, and to demonstrate the superiority of the proposed methods mathematically, numerically, and empirically.
Message
When I was an undergraduate student, I was fascinated by econometrics as a powerful tool for analyzing the economy. Since then, my dream has always been to become a professional researcher and teacher in econometrics. In the first quarter of 2020, I was promoted to tenured associate professor and taught Econometrics for the first time in my career, which made my dream come true. I will make my course as clear and inspiring to students as possible.
I always enjoy teaching “Analysis of Stationary Time Series” and “Analysis of Nonstationary Time Series”. It is exciting to learn the theory and practice of time series analysis. Discovering the time series properties of economic variables has a positive impact on the economic literature and on society at large. I hope my students will understand through my courses how interesting and useful time series analysis is.
Main Publications
Refereed journal articles
- E. Ghysels, J. B. Hill, and K. Motegi (2016). Testing for Granger causality with mixed frequency data. Journal of Econometrics, vol. 192, pp. 207-230.
- K. Motegi and A. Sadahiro (2018). Sluggish private investment in Japan’s Lost Decade: Mixed frequency vector autoregression approach. North American Journal of Economics and Finance, vol. 43, pp. 118-128.
- J. B. Hill and K. Motegi (2019). Testing the white noise hypothesis of stock returns. Economic Modelling, vol. 76, pp. 231-242.
- S. Hamori, K. Motegi, and Z. Zhang (2019). Calibration estimation of semiparametric copula models with data missing at random. Journal of Multivariate Analysis, vol. 173, pp. 85-109.
- J. B. Hill and K. Motegi (2020). A max-correlation white noise test for weakly dependent time series. Econometric Theory, vol. 36, pp. 907-960.
- K. Motegi, X. Cai, S. Hamori, and H. Xu (2020). Moving average threshold heterogeneous autoregressive (MAT-HAR) models. Journal of Forecasting, vol. 39, pp. 1035-1042.
- E. Ghysels, J. B. Hill, and K. Motegi (2020). Testing a large set of zero restrictions in regression models, with an application to mixed frequency Granger causality. Journal of Econometrics, vol. 218, pp. 633-654.
- S. Hamori, K. Motegi, and Z. Zhang (2020). Copula-based regression models with data missing at random. Journal of Multivariate Analysis, vol. 180, #104654.
- C. Ai, O. Linton, K. Motegi, and Z. Zhang (2021). A unified framework for efficient estimation of general treatment models. Quantitative Economics, vol. 12, pp. 779-816.
- K. Motegi and Y. Iitsuka (2023). Inter-regional dependence of J-REIT stock prices: A heteroscedasticity-robust time series approach. North American Journal of Economics and Finance, vol. 64, #101840.
- K. Motegi and S. Woo (2024). A note on the exponentiation approximation of the birthday paradox. Communications in Statistics – Theory and Methods, vol. 53, pp. 6417-6426.
- K. Motegi and S. Hamori (2025). Conditional threshold effects of stock market volatility on crude oil market volatility. Energy Economics, vol. 143, #108189.
- K. Motegi and S. Hayashi (2025). A groupwise approach to the birthday paradox. Communications in Statistics – Theory and Methods, DOI: 10.1080/03610926.2025.2505586
Contact
motegi(at)econ.kobe-u.ac.jp
Office hours
By appointment