pysal.model.spreg.GM_Endog_Error

class pysal.model.spreg.GM_Endog_Error(y, x, yend, q, w, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_ds=None)[source]

GMM method for a spatial error model with endogenous variables, with results and diagnostics; based on Kelejian and Prucha (1998, 1999) [Kelejian1998] [Kelejian1999].

Parameters:
y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant

yend : array

Two dimensional array with n rows and one column for each endogenous variable

q : array

Two dimensional array with n rows and one column for each external exogenous variable to use as instruments (note: this should not contain any variables from x)

w : pysal W object

Spatial weights object (always needed)

vm : boolean

If True, include variance-covariance matrix in summary results

name_y : string

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_yend : list of strings

Names of endogenous variables for use in output

name_q : list of strings

Names of instruments for use in output

name_w : string

Name of weights matrix for use in output

name_ds : string

Name of dataset for use in output

Examples

We first need to import the required modules: numpy, to convert the data we read into arrays that spreg understands, and pysal, to perform the analysis. We also import the GM_Endog_Error class itself so that it can be called directly.

>>> import pysal.lib
>>> import numpy as np
>>> from pysal.model.spreg import GM_Endog_Error

Open data on Columbus neighborhood crime (49 areas) using pysal.lib.io.open(). This is the DBF associated with the Columbus shapefile. Note that pysal.lib.io.open() also reads data in CSV format; since the class itself only requires the data to be passed in as numpy arrays, the user can read their data in with any method they prefer.

>>> dbf = pysal.lib.io.open(pysal.lib.examples.get_path("columbus.dbf"),'r')

Extract the CRIME column (crime rates) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be a numpy array of shape (n, 1), as opposed to the also common shape (n, ) that other packages accept.

>>> y = np.array([dbf.by_col('CRIME')]).T
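
Since the Columbus dataset has 49 observations, a quick check confirms the array has the required (n, 1) shape:

>>> y.shape
(49, 1)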

Extract the INC (income) vector from the DBF to be used as an independent variable in the regression. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this model adds a vector of ones to the independent variables passed in.

>>> x = np.array([dbf.by_col('INC')]).T
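
If the model had more than one exogenous regressor, the extra columns would simply be stacked side by side before transposing. A minimal sketch, using the OPEN (open space) column of the same DBF purely for illustration (any other numeric column would work the same way):

>>> x_multi = np.array([dbf.by_col('INC'), dbf.by_col('OPEN')]).T
>>> x_multi.shape
(49, 2)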

In this case we consider HOVAL (home value) to be an endogenous regressor. We tell the model this by passing it in a parameter separate from the exogenous variables (x).

>>> yend = np.array([dbf.by_col('HOVAL')]).T

Because we have an endogenous variable, we need to instrument for HOVAL in order to obtain a correct estimate of the model. We use DISCBD (distance to the CBD) for this purpose and hence pass it in the instruments parameter, ‘q’.

>>> q = np.array([dbf.by_col('DISCBD')]).T

Since we want to run a spatial error model, we need to specify the spatial weights matrix that includes the spatial configuration of the observations in the error component of the model. To do that, we can open an already existing gal file or create a new one. In this case, we will use columbus.gal, which contains contiguity relationships between the observations in the Columbus dataset used throughout this example. Note that to actually read the file, and not just open it, we need to append ‘.read()’ at the end of the command.

>>> w = pysal.lib.io.open(pysal.lib.examples.get_path("columbus.gal"), 'r').read() 
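
If no .gal file were available, a contiguity-based weights object could also be built directly from the shapefile. A minimal sketch, assuming the Queen contiguity constructor exposed by pysal.lib.weights:

>>> from pysal.lib.weights import Queen
>>> w_alt = Queen.from_shapefile(pysal.lib.examples.get_path("columbus.shp"))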

Unless there is a good reason not to, the weights have to be row-standardized so that every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can easily be done in the following way:

>>> w.transform='r'
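
To verify that the transformation took effect, one quick check (using the sparse representation that PySAL weights objects expose) is that every row of the matrix now sums to one:

>>> np.allclose(np.asarray(w.sparse.sum(axis=1)).flatten(), 1.0)
True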

With the preliminaries in place, we are ready to run the model. In this case, we need the variables (exogenous and endogenous), the instruments and the weights matrix. If we want the names of the variables printed in the output summary, we have to pass them in as well, although this is optional.

>>> model = GM_Endog_Error(y, x, yend, q, w=w, name_x=['inc'], name_y='crime', name_yend=['hoval'], name_q=['discbd'], name_ds='columbus')
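
As mentioned earlier, the constant is added automatically, so coefficients are estimated for the constant, inc and hoval (plus the spatial parameter lambda discussed below):

>>> model.k
3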

Once we have run the model, we can explore the output a little. The regression object we have created has many attributes, so take your time to discover them. Note that because we are running the classical GMM error model from 1998/99, the spatial parameter is obtained only as a point estimate, so although you get a value for it (there are four coefficients under model.betas), you cannot perform inference on it (there are only three values in model.std_err). Also, this regression uses a two stage least squares estimation method that accounts for the endogeneity created by the endogenous variables included.

>>> print(model.name_z)
['CONSTANT', 'inc', 'hoval', 'lambda']
>>> np.around(model.betas, decimals=4)
array([[ 82.573 ],
       [  0.581 ],
       [ -1.4481],
       [  0.3499]])
>>> np.around(model.std_err, decimals=4)
array([ 16.1381,   1.3545,   0.7862])
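
Beyond the coefficients, the attributes listed below are available for further inspection; the full results table, for instance, lives in model.summary and can be displayed with print(model.summary). As a small sketch of two further checks: the z statistics come as (statistic, p-value) pairs for every beta except lambda, and the pseudo R squared is the squared correlation between y and the predicted values, so it lies between zero and one:

>>> len(model.z_stat) == len(model.std_err)
True
>>> 0.0 <= model.pr2 <= 1.0
True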

Attributes:
summary : string

Summary of regression results and diagnostics (note: use in conjunction with the print() function)

betas : array

kx1 array of estimated coefficients

u : array

nx1 array of residuals

e_filtered : array

nx1 array of spatially filtered residuals

predy : array

nx1 array of predicted y values

n : integer

Number of observations

k : integer

Number of variables for which coefficients are estimated (including the constant)

y : array

nx1 array for dependent variable

x : array

Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant

yend : array

Two dimensional array with n rows and one column for each endogenous variable

z : array

nxk array of variables (combination of x and yend)

mean_y : float

Mean of dependent variable

std_y : float

Standard deviation of dependent variable

vm : array

Variance covariance matrix (kxk)

pr2 : float

Pseudo R squared (squared correlation between y and ypred)

sig2 : float

Sigma squared used in computations

std_err : array

1xk array of standard errors of the betas

z_stat : list of tuples

z statistic; each tuple contains the pair (statistic, p-value), where each is a float

name_y : string

Name of dependent variable for use in output

name_x : list of strings

Names of independent variables for use in output

name_yend : list of strings

Names of endogenous variables for use in output

name_z : list of strings

Names of exogenous and endogenous variables for use in output

name_q : list of strings

Names of external instruments

name_h : list of strings

Names of all instruments used in output

name_w : string

Name of weights matrix for use in output

name_ds : string

Name of dataset for use in output

title : string

Name of the regression method used

__init__(y, x, yend, q, w, vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_ds=None)[source]

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(y, x, yend, q, w[, vm, name_y, …]) Initialize self.

Attributes

mean_y
std_y