*  You have 2 dependent variables, x2 and x3.
   You have 1 independent variable, x1.
   All are interval variables.
   You want to know if the regression coefficient between x1 and x2 is
   significantly larger than the coefficient between x1 and x3.
 
* If you can assume that the regressions are independent, then you can simply
  regress x2 and x3 separately on x1, calculate the difference between the two
  regression coefficients, and divide this by the square root of the sum of the
  squared standard errors; under normal theory assumptions you have a
  t-statistic with N-2 degrees of freedom.
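The "independent regressions" calculation above can be sketched in NumPy/SciPy. This is not part of the original MATRIX solution; the data are simulated, and the variable names x1, x2, x3 simply follow the note:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50
x1 = rng.normal(size=n)
x2 = 1.5 * x1 + rng.normal(size=n)   # first dependent variable
x3 = 0.5 * x1 + rng.normal(size=n)   # second dependent variable

def slope_and_se(y, x):
    """OLS slope of y on x (with a constant) and its standard error."""
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    dfe = len(y) - X.shape[1]
    s2 = resid @ resid / dfe                 # residual mean square
    cov = s2 * np.linalg.inv(X.T @ X)        # covariance of the estimates
    return b[1], np.sqrt(cov[1, 1])

b2, se2 = slope_and_se(x2, x1)
b3, se3 = slope_and_se(x3, x1)

# Difference of the slopes over the root of the summed squared SEs
t = (b2 - b3) / np.sqrt(se2**2 + se3**2)
p = 2 * (1 - stats.t.cdf(abs(t), df=n - 2))
print(t, p)
```

Note that this treats the two slope estimates as uncorrelated, which is exactly the assumption the next point relaxes.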

* In general, you would not be able to assume independence, so you would need
  to subtract twice the covariance of the two estimated regression coefficients
  from the sum of their squared standard errors in order to get the correct
  estimated variance and then standard error of the difference.
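As a small numerical sketch of that correction (again with simulated, hypothetical data, not the original note's): when both dependent variables are regressed on the same cases, the two slope estimates covary through the residual covariance of the dependent variables, and that covariance term must be subtracted twice:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x1 = rng.normal(size=n)
# Correlated errors make the two slope estimates dependent.
e = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n)
x2 = 1.0 * x1 + e[:, 0]
x3 = 0.3 * x1 + e[:, 1]

X = np.column_stack([np.ones(n), x1])
Y = np.column_stack([x2, x3])
B = np.linalg.lstsq(X, Y, rcond=None)[0]      # rows: con, x1; cols: x2, x3
dfe = n - X.shape[1]
R = Y - X @ B
Sigma = R.T @ R / dfe                          # residual covariance of the DVs
xtx_inv = np.linalg.inv(X.T @ X)

var_b2 = Sigma[0, 0] * xtx_inv[1, 1]           # squared SE of the x2 slope
var_b3 = Sigma[1, 1] * xtx_inv[1, 1]           # squared SE of the x3 slope
cov_b  = Sigma[0, 1] * xtx_inv[1, 1]           # covariance of the two slopes

# Correct variance of the difference of the two slopes
var_diff = var_b2 + var_b3 - 2 * cov_b
print(var_diff)
```

With positively correlated errors the covariance term is positive, so ignoring it would overstate the variance of the difference.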
 
* MANOVA will do the regressions and provide one of the pieces needed to
  get the full covariance matrix of the estimated regression coefficients
  (what I'm calling sigma here, which is the estimated error or residual
  covariance matrix of the dependent variables), but to get the other piece
  (the inverse of the x'x matrix) you'd have to compute a constant variable,
  run a REGRESSION through the origin, print the covariance matrix of the
  parameter estimates, and then divide out the residual mean square to get
  the inverse x'x matrix. This is obviously a bit tedious, so here's a
  solution using the MATRIX procedure:
 
compute con=1.
matrix.
get x /var=con x1.                  /* design matrix: constant and x1 */.
get y /var=x2 x3.                   /* the two dependent variables */.
compute b=inv(t(x)*x)*t(x)*y.       /* OLS coefficients, one column per DV */.
compute xtx=inv(t(x)*x).
compute dfe=nrow(x)-ncol(x).
compute sigma=(t(y)*y-t(b)*t(x)*y)/dfe.
compute bcov=kroneker(sigma,xtx).   /* covariance of the stacked coefficients */.
compute b={b(:,1);b(:,2)}.          /* stack the coefficient columns */.
compute c={0,1,0,-1}.               /* contrast: slope for x2 minus slope for x3 */.
compute tstat=c*b/sqrt(c*bcov*t(c)).
compute ttmp=abs(tstat).
compute pvalue=2*(1-tcdf(ttmp,dfe)).
print tstat.
print pvalue.
end matrix.
 
 * You create a column of 1's prior to entering the MATRIX procedure, to
   represent the constant in the regression model. Inside MATRIX, you use
   the GET commands to define the right-hand (predictor) and left-hand
   (dependent) sides of your equation. Then you use standard formulas for
   the OLS estimates of the regression coefficients, the degrees of freedom,
   and the covariance matrix of the estimates. The matrix of regression
   coefficients needs to be made into a vector, so we do that next. Then we
   set up a contrast vector (c), with coefficients appropriate to compare
   the two slope coefficients, and use this in standard formulas to produce
   a t-statistic. You can of course print more things than I've chosen to
   print. Note that the t-statistic printed here is for the standard
   two-sided test. The original question was perhaps referring to a
   directional hypothesis, in which case this would be handled a bit
   differently: that is, we'd simply stop if the difference was negative,
   and if positive we wouldn't double the 1-tcdf result.
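For readers without SPSS, the same computation can be replicated step by step in NumPy/SciPy. This is a sketch with simulated data (the names and the contrast, slope for x2 minus slope for x3, follow the note; the column order of the dependent variables is an assumption of this sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 40
x1 = rng.normal(size=n)
e = rng.multivariate_normal([0, 0], [[1.0, 0.4], [0.4, 1.0]], size=n)
x2 = 0.8 * x1 + e[:, 0]
x3 = 0.8 * x1 + e[:, 1]

X = np.column_stack([np.ones(n), x1])     # constant and x1
Y = np.column_stack([x2, x3])             # the two dependent variables
xtx = np.linalg.inv(X.T @ X)              # inv(t(x)*x)
b = xtx @ X.T @ Y                         # OLS coefficients, 2x2
dfe = n - X.shape[1]                      # nrow(x)-ncol(x)
sigma = (Y.T @ Y - b.T @ X.T @ Y) / dfe   # residual covariance of the DVs
bcov = np.kron(sigma, xtx)                # kroneker(sigma,xtx)
bvec = b.flatten(order="F")               # stack the coefficient columns
c = np.array([0.0, 1.0, 0.0, -1.0])       # slope for x2 minus slope for x3
tstat = c @ bvec / np.sqrt(c @ bcov @ c)
pvalue = 2 * (1 - stats.t.cdf(abs(tstat), dfe))
print(tstat, pvalue)
```

The Kronecker product gives the covariance of the column-stacked coefficient vector, which is why the coefficients are stacked column by column before the contrast is applied.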