Econometrics is the application of mathematics,
statistical methods, and, more recently, computer
science, to economic data and is described as the branch of economics that
aims to give empirical
content to economic relations. More precisely, it is "the quantitative analysis
of actual economic phenomena based on the concurrent development of theory and
observation, related by appropriate methods of inference." An introductory
economics textbook describes econometrics as allowing economists "to sift
through mountains of data to extract simple relationships." The first
known use of the term "econometrics" (in cognate form) was
by Polish economist Paweł Ciompa in 1910. Ragnar Frisch is credited with coining
the term in the sense in which it is used today.
Econometrics is the intersection of economics, mathematics,
and statistics. Econometrics adds empirical content to economic theory allowing
theories to be tested and used for forecasting and policy evaluation.
Basic econometric models: linear regression
The basic tool for econometrics is the linear regression model. In modern
econometrics, other statistical tools are frequently used, but linear
regression is still the most frequently used starting point for an analysis.
Estimating a linear regression on two variables can be visualized as fitting a
line through data points representing paired values of the independent and
dependent variables.
For example, consider Okun's law,
which relates GDP growth to the unemployment rate. This relationship is
represented in a linear regression where the change in the unemployment rate (ΔUnemployment) is
a function of an intercept (β0), a given value of GDP growth multiplied by a
slope coefficient β1, and an error term ε:

ΔUnemployment = β0 + β1 · Growth + ε

The unknown parameters β0 and β1 can be estimated. Here β1 is
estimated to be −1.77 and β0 is estimated to be 0.83. This means that if GDP
growth increased by one percentage point, the unemployment rate would be
predicted to drop by 0.94 points (−1.77 × 1 + 0.83 = −0.94). The model could then be tested
for statistical significance as to whether an
increase in growth is associated with a decrease in unemployment, as hypothesized.
If the estimate of β1 were not significantly different from 0, the test would fail
to find evidence that changes in the growth rate and unemployment rate were
related.
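As a rough illustration of how such a regression might be estimated in practice, the following Python sketch fits the Okun's-law equation above by ordinary least squares using the statsmodels library. The eight data points are invented for illustration only; a real analysis would use measured GDP growth and changes in the unemployment rate.

import numpy as np
import statsmodels.api as sm

# Hypothetical paired observations (not real data): annual GDP growth in
# percent and the corresponding change in the unemployment rate in points.
gdp_growth = np.array([3.1, 2.4, 0.5, -1.2, 2.0, 3.5, 1.8, 0.9])
d_unemployment = np.array([-1.0, -0.5, 1.0, 2.5, 0.1, -1.5, -0.3, 0.6])

X = sm.add_constant(gdp_growth)            # adds the intercept term beta_0
fit = sm.OLS(d_unemployment, X).fit()      # ordinary least squares

print(fit.params)                          # estimated [beta_0, beta_1]
print(fit.pvalues)                         # tests of significance against zero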
Theory
Econometric theory uses statistical theory to evaluate and develop
econometric methods. Econometricians try to find estimators
that have desirable statistical properties including unbiasedness, efficiency, and consistency. An estimator is unbiased if its
expected value is the true value of the parameter; it is consistent if it
converges to the true value as sample size gets larger, and it is efficient if
the estimator has lower standard error than other unbiased estimators for a
given sample size. Ordinary least squares (OLS) is often used
for estimation since it provides the BLUE or "best linear unbiased
estimator" (where "best" means most efficient, unbiased
estimator) given the Gauss-Markov assumptions. When these assumptions are
violated or other statistical properties are desired, other estimation
techniques such as maximum likelihood estimation, generalized method of moments, or generalized least squares are used. Estimators
that incorporate prior beliefs are advocated by those who favor Bayesian statistics over traditional, classical
or "frequentist" approaches.
Methods
Applied econometrics uses theoretical econometrics
and real-world data for assessing economic theories, developing econometric
models, analyzing economic history, and forecasting.
Econometrics may use standard statistical
models to study economic questions, but most often these models are applied to observational data rather than to data from controlled
experiments. In this respect, the design of observational studies in econometrics
is similar to the design of studies in other observational disciplines, such as
astronomy, epidemiology, sociology and political science. Analysis of data from
an observational study is guided by the study protocol, although exploratory data
analysis may be useful for generating new hypotheses. Economics often
analyzes systems of equations and inequalities, such as supply
and demand hypothesized to be in equilibrium. Consequently, the field of
econometrics has developed methods for identification and estimation
of simultaneous-equation
models. These methods are analogous to methods used in other areas of
science, such as the field of system identification in systems
analysis and control theory. Such methods may allow researchers
to estimate models and investigate their empirical consequences, without
directly manipulating the system.
One of the fundamental statistical methods used by
econometricians is regression analysis. Regression methods are
important in econometrics because economists typically cannot use controlled experiments. Econometricians often
seek illuminating natural experiments in the absence of evidence
from controlled experiments. Observational data may be subject to omitted-variable bias and a list of other
problems that must be addressed using causal analysis of simultaneous-equation
models.
Artificial intelligence methods
Artificial Intelligence has become
important for building econometric models and for use in decision making. Artificial
intelligence is a nature-inspired computational paradigm which has found usage
in many areas. It allows economic models to be arbitrarily complex and to
evolve as the economic environment changes. For example,
artificial intelligence has been applied to simulate the stock market, to model
options and derivatives
as well as to model and control interest rates.
Experimental economics
In recent decades, econometricians have increasingly turned
to use of experiments to evaluate the
often-contradictory conclusions of observational studies. Here, controlled and
randomized experiments provide statistical inferences that may yield better
empirical performance than do purely observational studies.
Data
Data sets to which econometric analyses are applied can be
classified as time-series data, cross-sectional data, panel data,
and multidimensional panel data.
Time-series data sets contain observations over time; for example, inflation
over the course of several years. Cross-sectional data sets contain
observations at a single point in time; for example, many individuals' incomes
in a given year. Panel data sets contain both time-series and cross-sectional
observations. Multi-dimensional panel data sets contain observations across
time, cross-sectionally, and across some third dimension. For example, the Survey of Professional Forecasters
contains forecasts for many forecasters (cross-sectional observations), at many
points in time (time series observations), and at multiple forecast horizons (a
third dimension).
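The distinction can be made concrete with a short pandas sketch; the figures below are hypothetical and serve only to show how the three data shapes are indexed.

import pandas as pd

# Time-series data: one variable observed over several years (hypothetical).
inflation = pd.Series([2.1, 1.8, 2.4], index=[2019, 2020, 2021], name="inflation")

# Cross-sectional data: many units observed at one point in time (hypothetical).
incomes_2021 = pd.DataFrame({"person": ["A", "B", "C"],
                             "income": [41000, 52000, 38000]})

# Panel data: the same units observed at several points in time.
panel = pd.DataFrame({
    "person": ["A", "A", "B", "B"],
    "year":   [2020, 2021, 2020, 2021],
    "income": [40000, 41000, 50000, 52000],
}).set_index(["person", "year"])

print(panel)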
Instrumental variables
In many econometric contexts, the commonly used ordinary least squares method may not
recover the theoretical relation desired or may produce estimates with poor
statistical properties, because the assumptions for valid use of the method are
violated. One widely used remedy is the method of instrumental variables (IV). For an economic
model described by more than one equation, simultaneous-equation methods may be
used to remedy similar problems, including two IV variants: Two-Stage Least
Squares (2SLS) and Three-Stage Least Squares (3SLS).
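The logic of instrumental variables can be sketched with simulated data: below, the regressor x is correlated with the structural error, so ordinary least squares is biased, while a two-stage least squares estimate using a valid instrument z recovers the true slope. The data-generating process and its coefficients are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 10_000
z = rng.normal(size=n)                        # instrument: shifts x, unrelated to u
u = rng.normal(size=n)                        # structural error term
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
y = 1.0 + 2.0 * x + u                         # true slope is 2.0

def ols(X, y):
    # Ordinary least squares via the normal equations.
    return np.linalg.solve(X.T @ X, X.T @ y)

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

print("OLS slope: ", ols(X, y)[1])            # biased away from 2.0

# Stage 1: project x on the instrument.  Stage 2: regress y on the fitted x.
x_hat = Z @ ols(Z, x)
print("2SLS slope:", ols(np.column_stack([np.ones(n), x_hat]), y)[1])  # near 2.0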
Computational methods
Computational concerns are important for
evaluating econometric methods and for use in decision making.[15]
Such concerns include mathematical well-posedness: the existence, uniqueness,
and stability of any solutions to econometric
equations. Another concern is the numerical efficiency and accuracy of
software.[16]
A third concern is the usability of econometric software.
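One source of the accuracy concern can be illustrated briefly: forming the normal equations squares the condition number of the design matrix, which is why numerically careful least-squares routines work with QR or SVD factorizations of X itself. The design matrix below is an arbitrary ill-conditioned example chosen only for illustration.

import numpy as np

x = np.linspace(0, 1, 200)
X = np.vander(x, 10)             # a deliberately ill-conditioned polynomial design

print(f"cond(X)   = {np.linalg.cond(X):.2e}")
print(f"cond(X'X) = {np.linalg.cond(X.T @ X):.2e}")   # roughly the square of cond(X)

# An SVD-based solver avoids forming X'X explicitly.
y = np.sin(6 * x)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)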
Structural econometrics
Structural econometrics extends the ability of researchers
to analyze data by using economic models as the lens through which to view the
data. The benefit of this approach is that any policy recommendations are not
subject to the Lucas critique since counterfactual analyses take
an agent's re-optimization into account. Structural econometric analyses begin
with an economic model that captures the salient features of the agents under
investigation. The researcher then searches for parameters of the model that
match the outputs of the model to the data. There are two ways of doing this.
The first requires the researcher to completely solve the model and then use maximum likelihood. However, there have been many
advances that can bypass the full solution of the model and that estimate
models in two stages. Importantly, these methods allow the researcher to
consider more complicated models with strategic interactions and multiple
equilibria.
A good example of structural econometrics is in the
estimation of first-price sealed-bid auctions with
independent private values. The key difficulty with bidding data from these
auctions is that bids only partially reveal information on the underlying
valuations, since bidders shade their bids below their valuations. One would like to estimate
these valuations in order to understand the magnitude of profits each bidder
makes. More importantly, it is necessary to have the valuation distribution in
hand to engage in mechanism design. In a first-price sealed-bid
auction, the expected payoff of a bidder with valuation v who submits bid b is given by:

(v − b) · P(b),  where P(b) is the probability that the bid b wins the auction.
Notice that the probability that a bid wins an auction can
be estimated from a data set of completed auctions, where all bids are
observed. This can be done using simple non-parametric
estimators. It is then possible to use the above
relation and the estimated probability function and its derivative to
estimate the underlying valuation pointwise. This will then allow the investigator
to estimate the valuation distribution.
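A sketch of this step, under the standard symmetric independent-private-values assumptions, is given below. Differentiating the expected payoff (v − b) · P(b) with respect to b and setting it to zero gives v = b + P(b)/P′(b); with N symmetric bidders P(b) = G(b)^(N−1), so v = b + G(b)/((N − 1) · g(b)), where G and g are the distribution and density of bids, both estimable nonparametrically. The bids below are simulated from a known equilibrium, not taken from real auction data.

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
N = 4                                     # bidders per auction (illustrative)
values = rng.uniform(size=5000)           # true valuations, unobserved in practice
bids = values * (N - 1) / N               # equilibrium bids with uniform valuations

sorted_bids = np.sort(bids)
G = lambda b: np.searchsorted(sorted_bids, b, side="right") / bids.size  # empirical CDF
g = gaussian_kde(bids)                    # kernel density estimate of the bids

# Pointwise recovery of pseudo-valuations from the first-order condition.
pseudo_values = bids + G(bids) / ((N - 1) * g(bids))

print(np.corrcoef(pseudo_values, values)[0, 1])   # close to 1: valuations recovered

With the pseudo-valuations in hand, their empirical distribution estimates the valuation distribution needed for the mechanism-design questions mentioned above.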
Example
This example assumes that the natural
logarithm of a person's wage is a linear function of the number of years of
education that person has acquired:

ln(wage) = β0 + β1 · (years of education) + ε

The parameter β1 measures the increase in the
natural log of the wage attributable to one more year of education. The term ε
is a random variable representing all other factors that may
have a direct influence on wage. The econometric goal is to estimate the
parameters β0 and β1 under specific assumptions about the random variable
ε. For example, if ε
is uncorrelated with years of education, then the equation can
be estimated with ordinary least squares.
If the researcher could randomly assign people to different
levels of education, the data set thus generated would allow estimation of the
effect of changes in years of education on wages. In reality, those experiments
cannot be conducted. Instead, the econometrician observes the years of
education of and the wages paid to people who differ along many dimensions.
Given this kind of data, the estimated coefficient on Years of Education in the
equation above reflects both the effect of education on wages and the effect of
other variables on wages, if those other variables were correlated with
education. For example, people born in certain places may have higher wages and
higher levels of education. Unless the econometrician controls for place of
birth in the above equation, the effect of birthplace on wages may be falsely
attributed to the effect of education on wages.
The most obvious way to control for birthplace is to include
a measure of the effect of birthplace in the equation above. Exclusion of
birthplace, together with the assumption that ε
is uncorrelated with education, produces a misspecified model.
Another technique is to include in the equation an additional set of measured
covariates which are not instrumental variables, yet render β1 identifiable. An
overview of econometric methods used to study this problem was provided by Card
(1999).
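The consequences of omitting such a control can be made concrete with a simulation. The data-generating process below is entirely invented: birthplace raises both education and wages, so the short regression overstates the assumed 0.08 return to a year of schooling, while adding the birthplace control recovers it.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 20_000
birthplace = rng.binomial(1, 0.5, size=n)              # 1 = high-wage region (assumed)
education = 12 + 2 * birthplace + rng.normal(size=n)   # years of schooling
log_wage = 1.5 + 0.08 * education + 0.3 * birthplace + 0.2 * rng.normal(size=n)

X_short = sm.add_constant(education)                                  # omits birthplace
X_long = sm.add_constant(np.column_stack([education, birthplace]))    # controls for it

print(sm.OLS(log_wage, X_short).fit().params[1])   # overstates the return to education
print(sm.OLS(log_wage, X_long).fit().params[1])    # close to the assumed 0.08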
Journals
The main journals which publish work in econometrics are Econometrica,
the Journal of Econometrics, the Review of Economics and
Statistics, Econometric Theory, the Journal of Applied Econometrics,
Econometric Reviews, the Econometrics Journal, Applied
Econometrics and International Development, the Journal of Business
& Economic Statistics, and the Journal of Economic and
Social Measurement.
Limitations and criticisms
Like other forms of statistical analysis, badly specified econometric
models may show a spurious correlation where two variables are
correlated but causally unrelated. In a study of the use of econometrics in
major economics journals, McCloskey
concluded that economists report p values (following the Fisherian
tradition of tests of significance of point null-hypotheses),
neglecting concerns of type II errors; economists fail to report estimates
of the size of effects (apart from statistical significance) and to discuss
their economic importance. Economists also fail to use economic reasoning for model
selection, especially for deciding which variables to include in a
regression.
In some cases, economic variables cannot be experimentally
manipulated as treatments randomly assigned to subjects. In such cases,
economists rely on observational studies, often using data sets
with many strongly associated covariates, resulting in enormous numbers of models with
similar explanatory ability but different covariates and regression estimates.
Regarding the plurality of models compatible with observational data sets, Edward
Leamer urged that "professionals ... properly withhold belief until an
inference can be shown to be adequately insensitive to the choice of
assumptions".
Economists from the Austrian
School argue that aggregate economic models are not well suited to describe
economic reality because they waste a large part of specific knowledge. Friedrich
Hayek in his The Use of Knowledge in Society
argued that "knowledge of the particular circumstances of time and
place" is not easily aggregated and is often ignored by professional
economists.