# 2: Income Effect on Motorization
Model 2 posits that the effect of income (relative to drive alone) is the same for both shared ride modes and transit but is different for the other modes. This is represented in the model by constraining the income coefficients in both shared ride modes and the transit mode to be equal. (p. 108)
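Concretely, with drive alone as the reference alternative, the systematic utilities estimated below take (as a sketch of this specification) the form

$$
\begin{aligned}
V_{\text{DA}} &= \beta_{\text{time}}\,\text{tottime}_{\text{DA}} + \beta_{\text{cost}}\,\text{totcost}_{\text{DA}} \\
V_{j} &= \text{ASC}_{j} + \beta_{\text{inc,Moto}}\,\text{hhinc} + \beta_{\text{time}}\,\text{tottime}_{j} + \beta_{\text{cost}}\,\text{totcost}_{j}, && j \in \{\text{SR2},\ \text{SR3+},\ \text{Transit}\} \\
V_{k} &= \text{ASC}_{k} + \beta_{\text{inc},k}\,\text{hhinc} + \beta_{\text{time}}\,\text{tottime}_{k} + \beta_{\text{cost}}\,\text{totcost}_{k}, && k \in \{\text{Bike},\ \text{Walk}\}
\end{aligned}
$$

where the single shared coefficient $\beta_{\text{inc,Moto}}$ enforces the equality constraint across the two shared ride modes and transit.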
import larch as lx
lx.__version__
'6.0.37.dev32+gdab7641'
This example is a mode choice model built using the MTC example dataset. First we create the Dataset and Model objects:
d = lx.examples.MTC(format="dataset")
d
<xarray.Dataset> Size: 2MB
Dimensions:    (caseid: 5029, altid: 6)
Coordinates:
  * caseid     (caseid) int64 40kB 1 2 3 4 5 6 ... 5024 5025 5026 5027 5028 5029
  * altid      (altid) int64 48B 1 2 3 4 5 6
    alt_names  (altid) <U7 168B 'DA' 'SR2' 'SR3+' 'Transit' 'Bike' 'Walk'
Data variables: (12/38)
    chose      (caseid, altid) float32 121kB 1.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0
    ivtt       (caseid, altid) float64 241kB 13.38 18.38 20.38 ... 1.59 6.55 0.0
    ovtt       (caseid, altid) float64 241kB 2.0 2.0 2.0 15.2 ... 16.0 4.5 0.0
    tottime    (caseid, altid) float64 241kB 15.38 20.38 22.38 ... 11.05 19.1
    totcost    (caseid, altid) float64 241kB 70.63 35.32 20.18 ... 75.0 0.0 0.0
    hhid       (caseid) int64 40kB 2 3 5 6 8 8 ... 9429 9430 9433 9434 9436 9438
    ...         ...
    corredis   (caseid) int64 40kB 0 1 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
    vehbywrk   (caseid) float64 40kB 4.0 1.0 0.33 1.0 0.0 ... 2.0 2.0 3.0 3.0
    vocc       (caseid) int64 40kB 1 0 1 0 2 0 1 1 1 1 0 ... 1 2 1 1 0 1 2 1 1 1
    wgt        (caseid) int64 40kB 1 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
    _avail_    (caseid, altid) int8 30kB 1 1 1 1 1 0 1 1 1 ... 1 0 1 1 1 1 1 1 1
    avail      (caseid, altid) int8 30kB 1 1 1 1 1 0 1 1 1 ... 1 0 1 1 1 1 1 1 1
Attributes:
    _caseid_:  caseid
    _altid_:   altid
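Because d is an ordinary xarray.Dataset, the usual xarray tools can be used to inspect it. For example (a quick check beyond what this example shows), the share of cases for which each alternative is available:

```python
# fraction of cases in which each alternative is available, by altid
d["avail"].mean(dim="caseid")
```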
m = lx.Model(d, compute_engine="numba")
Then we can build up the utility function. We’ll use some idco data first, using the Model.utility_co attribute. This attribute is a dict-like object, to which we can assign LinearFunction objects for each alternative code.
from larch import PX, P, X
m.utility_co[2] = P("ASC_SR2") + P("hhinc#Moto") * X("hhinc")
m.utility_co[3] = P("ASC_SR3P") + P("hhinc#Moto") * X("hhinc")
m.utility_co[4] = P("ASC_TRAN") + P("hhinc#Moto") * X("hhinc")
m.utility_co[5] = P("ASC_BIKE") + P("hhinc#5") * X("hhinc")
m.utility_co[6] = P("ASC_WALK") + P("hhinc#6") * X("hhinc")
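Note that the SR2, SR3+, and Transit utilities all reference the same parameter name, hhinc#Moto. Parameters in larch are identified by name, so reusing the name is what imposes the equality constraint described above; no separate constraint machinery is needed. Displaying the linear functions (a quick check beyond the original example) makes the shared term visible:

```python
# SR2 and Transit both contain P("hhinc#Moto"), so a single coefficient is
# estimated for the shared-ride and transit income effects
print(m.utility_co[2])
print(m.utility_co[4])
```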
Next we’ll use some idca data, with the utility_ca attribute. This attribute is a single LinearFunction that is applied across all alternatives using idca data. Because the data is structured to vary across alternatives, the parameters (and thus the structure of the LinearFunction) do not need to vary across alternatives.
m.utility_ca = PX("tottime") + PX("totcost")
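Here PX("tottime") is shorthand for P("tottime") * X("tottime"), a parameter paired with the identically named data column, so the assignment above could equivalently be written as:

```python
# equivalent to PX("tottime") + PX("totcost")
m.utility_ca = P("tottime") * X("tottime") + P("totcost") * X("totcost")
```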
Lastly, we need to identify idca format data that gives the availability for each alternative, as well as the number of times each alternative is chosen. (In traditional discrete choice analysis, this is often 0 or 1, but it need not be binary, or even integral.)
m.availability_ca_var = "avail"
m.choice_ca_var = "chose"
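As a quick sanity check (not part of the original example), we can confirm that in this dataset the chose variable really is a simple 0/1 indicator, with exactly one chosen, available alternative per case:

```python
import numpy as np

# each case has exactly one unit of choice across the six alternatives
assert np.allclose(d["chose"].sum(dim="altid").values, 1.0)
# and no chosen alternative is marked unavailable
assert float(((d["chose"] > 0) & (d["avail"] == 0)).sum()) == 0
```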
And let’s give our model a descriptive title.
m.title = "MTC Example 2, Motorized"
We can view a summary of the choices and alternative availabilities to make sure the model is set up correctly.
m.choice_avail_summary()
|  | name | chosen | available |
|---|---|---:|---:|
| 1 | DA | 3637 | 4755 |
| 2 | SR2 | 517 | 5029 |
| 3 | SR3+ | 161 | 5029 |
| 4 | Transit | 498 | 4003 |
| 5 | Bike | 50 | 1738 |
| 6 | Walk | 166 | 1479 |
| < Total All Alternatives > |  | 5029 | <NA> |
We’ll set a parameter cap (bound) of +/- 20 on every parameter, which sets each parameter’s minimum and maximum (visible in the parameter frame below) and helps improve the numerical stability of the optimization algorithm used in estimation.
m.set_cap(20)
Having created this model, we can then estimate it:
assert m.compute_engine == "numba"
result = m.maximize_loglike(stderr=True)
m.calculate_parameter_covariance()
m.loglike()
Iteration 043 [Optimization terminated successfully]
Best LL = -3628.2862250201597
| param_name | value | best | initvalue | minimum | maximum | nullvalue | holdfast |
|---|---:|---:|---:|---:|---:|---:|---:|
| ASC_BIKE | -2.390508 | -2.390508 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| ASC_SR2 | -2.135835 | -2.135835 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| ASC_SR3P | -3.530700 | -3.530700 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| ASC_TRAN | -0.796784 | -0.796784 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| ASC_WALK | -0.224385 | -0.224385 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| hhinc#5 | -0.012457 | -0.012457 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| hhinc#6 | -0.009261 | -0.009261 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| hhinc#Moto | -0.002875 | -0.002875 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| totcost | -0.004899 | -0.004899 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
| tottime | -0.051490 | -0.051490 | 0.0 | -20.0 | 20.0 | 0.0 | 0 |
np.float64(-3628.2862250201597)
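The result object returned by maximize_loglike also records details of the optimization run. For example (assuming the usual larch result attributes, mirroring the "Best LL" value printed above):

```python
# converged log likelihood stored on the optimization result
result.loglike
```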
m.parameter_summary()
| Parameter | Value | Std Err | t Stat | Signif | Null Value |
|---|---:|---:|---:|:---:|---:|
| ASC_BIKE | -2.39 | 0.304 | -7.86 | *** | 0.00 |
| ASC_SR2 | -2.14 | 0.0884 | -24.17 | *** | 0.00 |
| ASC_SR3P | -3.53 | 0.115 | -30.63 | *** | 0.00 |
| ASC_TRAN | -0.797 | 0.112 | -7.09 | *** | 0.00 |
| ASC_WALK | -0.224 | 0.193 | -1.16 |  | 0.00 |
| hhinc#5 | -0.0125 | 0.00531 | -2.34 | * | 0.00 |
| hhinc#6 | -0.00926 | 0.00301 | -3.07 | ** | 0.00 |
| hhinc#Moto | -0.00287 | 0.00122 | -2.35 | * | 0.00 |
| totcost | -0.00490 | 0.000238 | -20.56 | *** | 0.00 |
| tottime | -0.0515 | 0.00310 | -16.63 | *** | 0.00 |
We can use the ordering attribute to fix the ordering of parameters and group them systematically:
m.ordering = (
(
"LOS",
"totcost",
"tottime",
),
(
"ASCs",
"ASC.*",
),
(
"Income",
"hhinc.*",
),
)
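The strings after each group label ("ASC.*", "hhinc.*") are regular expressions matched against parameter names, so a single pattern can sweep up a whole family of parameters. A tiny stand-alone illustration of that matching, using plain Python rather than larch:

```python
import re

# hypothetical check of the grouping patterns used above: "ASC.*" picks up
# every alternative specific constant, "hhinc.*" every income parameter
names = ["ASC_BIKE", "ASC_SR2", "hhinc#Moto", "hhinc#5", "totcost", "tottime"]
print([n for n in names if re.match("ASC.*", n)])    # constants
print([n for n in names if re.match("hhinc.*", n)])  # income parameters
```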
m.parameter_summary()
| Category | Parameter | Value | Std Err | t Stat | Signif | Null Value |
|---|---|---:|---:|---:|:---:|---:|
| LOS | totcost | -0.00490 | 0.000238 | -20.56 | *** | 0.00 |
|  | tottime | -0.0515 | 0.00310 | -16.63 | *** | 0.00 |
| ASCs | ASC_BIKE | -2.39 | 0.304 | -7.86 | *** | 0.00 |
|  | ASC_SR2 | -2.14 | 0.0884 | -24.17 | *** | 0.00 |
|  | ASC_SR3P | -3.53 | 0.115 | -30.63 | *** | 0.00 |
|  | ASC_TRAN | -0.797 | 0.112 | -7.09 | *** | 0.00 |
|  | ASC_WALK | -0.224 | 0.193 | -1.16 |  | 0.00 |
| Income | hhinc#5 | -0.0125 | 0.00531 | -2.34 | * | 0.00 |
|  | hhinc#6 | -0.00926 | 0.00301 | -3.07 | ** | 0.00 |
|  | hhinc#Moto | -0.00287 | 0.00122 | -2.35 | * | 0.00 |
Finally, let’s print model statistics. Note that if you want the log likelihood at constants, you need to estimate a separate constants-only model; a sketch of one follows the statistics below.
m.estimation_statistics()
| Statistic | Aggregate | Per Case |
|---|---:|---:|
| Number of Cases | 5029 |  |
| Log Likelihood at Convergence | -3628.29 | -0.72 |
| Log Likelihood at Null Parameters | -7309.60 | -1.45 |
| Rho Squared w.r.t. Null Parameters | 0.504 |  |
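As noted above, larch does not compute the log likelihood at constants as part of these statistics. A minimal sketch of the separate constants-only model one could estimate to obtain it, reusing the data, availability, and choice setup from above (the model name mc and its title are hypothetical):

```python
# Constants-only ("market share") model: same data, availability, and choice
# as the main model, but only alternative specific constants in the utilities.
mc = lx.Model(d, compute_engine="numba")
mc.title = "MTC Example 2, Constants Only"  # hypothetical title
for altid, name in [
    (2, "ASC_SR2"),
    (3, "ASC_SR3P"),
    (4, "ASC_TRAN"),
    (5, "ASC_BIKE"),
    (6, "ASC_WALK"),
]:
    mc.utility_co[altid] = P(name)
mc.availability_ca_var = "avail"
mc.choice_ca_var = "chose"
mc.set_cap(20)
# the converged "Best LL" of this model is the log likelihood at constants
result_constants = mc.maximize_loglike()
```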