
some log-lik risks are nan in SL risk estimates #2

Open
jlstiles opened this issue Aug 23, 2017 · 5 comments

@jlstiles

Have you noticed trouble with the log-likelihood risk estimates for the SL wrapper? Some of the risks show up as NaN, but SL still returns a coefficient, so the predictions and meta-learning in SL appear to be working.

@jlstiles jlstiles changed the title half log-lik risks are nan some log-lik risks are nan in SL risk estimates Aug 23, 2017
@benkeser
Owner

I guess it could be giving predictions outside (0,1), which is causing the problem. Can you check that easily?
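For context, a minimal sketch (not part of the original comment) of why predictions outside (0, 1) would make the log-likelihood risk NaN:

# Hypothetical illustration: the log-likelihood loss is undefined outside (0, 1)
Y  <- c(1, 0, 1)
Qk <- c(0.7, 1.2, -0.1)   # second and third predictions fall outside (0, 1)
loss <- -(Y * log(Qk) + (1 - Y) * log(1 - Qk))
loss          # log of a negative number is NaN, so those losses are NaN
mean(loss)    # and the averaged risk estimate is NaN as well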

@jlstiles
Author

jlstiles commented Aug 24, 2017 via email

@jlstiles
Author

jlstiles commented Sep 7, 2017

Yes, it is giving predictions greater than 1:

# Install the simulation helpers and load packages
devtools::install_github("jlstiles/Simulations")
library(hal9001)
library(Simulations)

# Simulate data and build counterfactual data sets with A set to 0 and 1
set.seed(14819)
n <- 200
data <- gendata(n, g0 = g0_linear, Q0 = Q0_trig1)
X <- data
X$Y <- NULL
Y <- data$Y
X0 <- X1 <- X
X0$A <- 0
X1$A <- 1
newdata <- rbind(X, X1, X0)

# Fit HAL and time the fit
time <- proc.time()
halres <- hal(Y = Y, newX = newdata, X = X, family = "binomial",
              verbose = FALSE, parallel = FALSE)
timehal <- proc.time() - time

# Mean log-likelihood on the observed data; NaN whenever predictions leave (0, 1)
Qk <- halres$pred[1:n]
riskhal <- mean(Y * log(Qk) + (1 - Y) * log(1 - Qk))
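As a quick check (a sketch added here, using the halres, Qk, and Y objects from the code above):

# Confirm how far the fitted values leave (0, 1)
range(halres$pred)                          # the maximum exceeds 1 if the problem is present
mean(halres$pred > 1 | halres$pred < 0)     # proportion of out-of-range predictions
sum(is.nan(Y * log(Qk) + (1 - Y) * log(1 - Qk)))  # observations contributing NaN losses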

@benkeser
Owner

benkeser commented Sep 7, 2017

Yes, I'm not surprised that was at the root. I'd suggest just truncating predictions for now until we work out a better solution.
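A minimal sketch of that truncation, continuing from the objects above (the bound 0.001 is an arbitrary choice for illustration, not a value from the thread):

# Truncate predictions into (0, 1) before computing the log-likelihood quantity
eps <- 0.001                               # arbitrary truncation level
Qk_trunc <- pmin(pmax(Qk, eps), 1 - eps)   # bound predictions away from 0 and 1
riskhal_trunc <- mean(Y * log(Qk_trunc) + (1 - Y) * log(1 - Qk_trunc))  # same quantity as riskhal, now finite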

@jlstiles
Author

jlstiles commented Sep 7, 2017

On a side note, is it possible to use log-likelihood loss when fitting hal? Clearly only squared-error loss is being used, since the predictions go outside (0, 1). I thought glmnet had that option.
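For reference, glmnet does accept family = "binomial", which fits the lasso under the binomial log-likelihood. A minimal sketch on a generic design matrix (the model.matrix call is just a placeholder design, not the basis construction hal uses internally):

# Hypothetical sketch: lasso with binomial (log-likelihood) loss in glmnet
library(glmnet)
x_basis <- model.matrix(~ . - 1, data = X)   # placeholder design matrix
cvfit <- cv.glmnet(x_basis, Y, family = "binomial")
phat  <- predict(cvfit, newx = x_basis, s = "lambda.min", type = "response")
range(phat)                                  # "response" predictions stay inside (0, 1)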
