# pysal.model.spreg.GM_Combo_Het

class pysal.model.spreg.GM_Combo_Het(y, x, yend=None, q=None, w=None, w_lags=1, lag_q=True, max_iter=1, epsilon=1e-05, step1c=False, inv_method='power_exp', vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_ds=None)

GMM method for a spatial lag and error model with heteroskedasticity and endogenous variables, with results and diagnostics; based on Arraiz et al [Arraiz2010], following Anselin [Anselin2011].

Parameters:

y : array
    nx1 array for dependent variable
x : array
    Two dimensional array with n rows and one column for each independent (exogenous) variable, excluding the constant
yend : array
    Two dimensional array with n rows and one column for each endogenous variable
q : array
    Two dimensional array with n rows and one column for each external exogenous variable to use as instruments (note: this should not contain any variables from x)
w : pysal W object
    Spatial weights object (always needed)
w_lags : integer
    Orders of W to include as instruments for the spatially lagged dependent variable. For example, if w_lags=1, then instruments are WX; if w_lags=2, then WX, WWX; and so on.
lag_q : boolean
    If True, then include spatial lags of the additional instruments (q).
max_iter : int
    Maximum number of iterations of steps 2a and 2b from Arraiz et al. Note: epsilon provides an additional stop condition.
epsilon : float
    Minimum change in lambda required to stop iterations of steps 2a and 2b from Arraiz et al. Note: max_iter provides an additional stop condition.
step1c : boolean
    If True, then include Step 1c from Arraiz et al.
inv_method : string
    If "power_exp", then compute the inverse using the power expansion. If "true_inv", then compute the true inverse. Note that true_inv will fail for large n.
vm : boolean
    If True, include the variance-covariance matrix in summary results
name_y : string
    Name of dependent variable for use in output
name_x : list of strings
    Names of independent variables for use in output
name_yend : list of strings
    Names of endogenous variables for use in output
name_q : list of strings
    Names of instruments for use in output
name_w : string
    Name of weights matrix for use in output
name_ds : string
    Name of dataset for use in output

Examples

We first need to import the needed modules, namely numpy to convert the data we read into arrays that spreg understands and pysal to perform all the analysis.

>>> import numpy as np
>>> import pysal.lib
>>> from pysal.model.spreg import GM_Combo_Het


Open data on Columbus neighborhood crime (49 areas) using pysal.lib.io.open(). This is the DBF associated with the Columbus shapefile. Note that pysal.lib.io.open() also reads data in CSV format; since the class requires data to be passed in as numpy arrays, the user can read the data in using any method.

>>> db = pysal.lib.io.open(pysal.lib.examples.get_path('columbus.dbf'),'r')


Extract the HOVAL column (home values) from the DBF file and make it the dependent variable for the regression. Note that PySAL requires this to be a numpy array of shape (n, 1) as opposed to the also common shape of (n, ) that other packages accept.

>>> y = np.array(db.by_col("HOVAL"))
>>> y = np.reshape(y, (49,1))


Extract the INC (income) vector from the DBF to be used as an independent variable in the regression. Note that PySAL requires this to be an nxj numpy array, where j is the number of independent variables (not including a constant). By default this class adds a vector of ones to the independent variables passed in.

>>> X = []
>>> X.append(db.by_col("INC"))
>>> X = np.array(X).T


Since we want to run a spatial model, we need to specify the spatial weights matrix that encodes the spatial configuration of the observations, which enters both the lag and the error components of the model. To do that, we can open an already existing GAL file (a sketch of this follows the code below) or create a new one. In this case, we will create one from columbus.shp.

>>> w = pysal.lib.weights.Rook.from_shapefile(pysal.lib.examples.get_path("columbus.shp"))

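As mentioned above, an already existing GAL file can be read directly instead of building the weights from the shapefile. A minimal sketch, assuming a columbus.gal file is available among the PySAL example datasets:

>>> gal = pysal.lib.io.open(pysal.lib.examples.get_path("columbus.gal"), 'r')
>>> w_from_gal = gal.read()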

Unless there is a good reason not to, the weights have to be row-standardized so that every row of the matrix sums to one. Among other things, this allows us to interpret the spatial lag of a variable as the average value of the neighboring observations. In PySAL, this can be easily performed in the following way:

>>> w.transform = 'r'


The Combo class runs a SARAR model, that is, a spatial lag plus error model. In this case we will run a simple version with only the spatial effects and the exogenous variables. Since it is a spatial model, we have to pass in the weights matrix. If we want the names of the variables printed in the output summary, we have to pass them in as well, although this is optional.

>>> reg = GM_Combo_Het(y, X, w=w, step1c=True, name_y='hoval', name_x=['income'], name_ds='columbus')


Once we have run the model, we can explore the output a little. The regression object we have created has many attributes, so take your time to discover them. This class estimates a model that explicitly accounts for heteroskedasticity and, unlike the models in spreg.error_sp, allows for inference on the spatial parameter. Hence, we find as many betas as standard errors, which we calculate by taking the square root of the diagonal of the variance-covariance matrix:

>>> print(reg.name_z)
['CONSTANT', 'income', 'W_hoval', 'lambda']
>>> print(np.around(np.hstack((reg.betas, np.sqrt(reg.vm.diagonal()).reshape(4, 1))), 4))
[[  9.9753  14.1435]
 [  1.5742   0.374 ]
 [  0.1535   0.3978]
 [  0.2103   0.3924]]
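
For inference on the individual coefficients, including the spatial parameter, the z_stat attribute (documented under Attributes below) stores (statistic, p-value) tuples. A minimal, hedged sketch of unpacking them (no output shown, since the values depend on the estimation):

>>> z_values, p_values = zip(*reg.z_stat)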


This class also allows the user to run a spatial lag+error model with the extra feature of including non-spatial endogenous regressors. This means that, in addition to the spatial lag and error, we consider some of the variables on the right-hand side of the equation as endogenous and we instrument for them. As an example, we will include CRIME (crime rates) as endogenous and will instrument it with DISCBD (distance to the central business district). We first need to read in the variables:

>>> yd = []
>>> yd.append(db.by_col("CRIME"))
>>> yd = np.array(yd).T
>>> q = []
>>> q.append(db.by_col("DISCBD"))
>>> q = np.array(q).T


And then we can run and explore the model analogously to the previous combo:

>>> reg = GM_Combo_Het(y, X, yd, q, w=w, step1c=True, name_x=['inc'], name_y='hoval', name_yend=['crime'], name_q=['discbd'], name_ds='columbus')
>>> print(reg.name_z)
['CONSTANT', 'inc', 'crime', 'W_hoval', 'lambda']
>>> print(np.round(reg.betas, 4))
[[ 113.9129]
 [  -0.3482]
 [  -1.3566]
 [  -0.5766]
 [   0.6561]]
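
Beyond the point estimates, the fitted object also carries the full results table and fit diagnostics listed under Attributes below. A minimal, hedged sketch of accessing them (no output shown):

>>> results_text = reg.summary   # formatted summary of results and diagnostics
>>> fit = reg.pr2                # pseudo R squared (squared correlation between y and predicted y)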

Attributes:

summary : string
    Summary of regression results and diagnostics (note: use in conjunction with the print command)
betas : array
    kx1 array of estimated coefficients
u : array
    nx1 array of residuals
e_filtered : array
    nx1 array of spatially filtered residuals
e_pred : array
    nx1 array of residuals (using reduced form)
predy : array
    nx1 array of predicted y values
predy_e : array
    nx1 array of predicted y values (using reduced form)
n : integer
    Number of observations
k : integer
    Number of variables for which coefficients are estimated (including the constant)
y : array
    nx1 array for dependent variable
x : array
    Two dimensional array with n rows and one column for each independent (exogenous) variable, including the constant
yend : array
    Two dimensional array with n rows and one column for each endogenous variable
q : array
    Two dimensional array with n rows and one column for each external exogenous variable used as instruments
z : array
    nxk array of variables (combination of x and yend)
h : array
    nxl array of instruments (combination of x and q)
iter_stop : string
    Stop criterion reached during iteration of steps 2a and 2b from Arraiz et al.
iteration : integer
    Number of iterations of steps 2a and 2b from Arraiz et al.
mean_y : float
    Mean of dependent variable
std_y : float
    Standard deviation of dependent variable
vm : array
    Variance covariance matrix (kxk)
pr2 : float
    Pseudo R squared (squared correlation between y and ypred)
pr2_e : float
    Pseudo R squared (squared correlation between y and ypred_e (using reduced form))
std_err : array
    1xk array of standard errors of the betas
z_stat : list of tuples
    z statistic; each tuple contains the pair (statistic, p-value), where each is a float
name_y : string
    Name of dependent variable for use in output
name_x : list of strings
    Names of independent variables for use in output
name_yend : list of strings
    Names of endogenous variables for use in output
name_z : list of strings
    Names of exogenous and endogenous variables for use in output
name_q : list of strings
    Names of external instruments
name_h : list of strings
    Names of all instruments used in output
name_w : string
    Name of weights matrix for use in output
name_ds : string
    Name of dataset for use in output
title : string
    Name of the regression method used
hth : float
    H'H
__init__(y, x, yend=None, q=None, w=None, w_lags=1, lag_q=True, max_iter=1, epsilon=1e-05, step1c=False, inv_method='power_exp', vm=False, name_y=None, name_x=None, name_yend=None, name_q=None, name_w=None, name_ds=None)

Initialize self. See help(type(self)) for accurate signature.

Methods

__init__(y, x[, yend, q, w, w_lags, lag_q, …])
    Initialize self.

Attributes

mean_y
std_y