So I'm working with @aedobbyn, who's contributed here too, and we just ran into a problem with this package, I think because of transitioning to R 4.3 (I'm not totally sure yet what caused it), but a workflow we have that uses this package just started to fail. The problem I found is in the `covid19mobility::read_google_mobility()` function. In R 4.2 it seems to run fine, but in R 4.3 it fails in the call to `tidyr::pivot_longer` at https://github.com/Covid19R/covid19mobility/blob/master/R/refresh_covid19mobility_google.R#L292-L319

The `readr::read_csv` call works fine and creates the ~11 million row data.frame, but then the `tidyr::pivot_longer` call fails and crashes R. It runs fine when a smaller set of rows is passed to it, e.g., 2 million or 5 million, but I found that with ~9 million or more rows, `tidyr::pivot_longer` kills R.
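For concreteness, here's a sketch of the shape and scale of the failing call. The column names are made up (the real schema is in the file linked above), and I haven't verified that a synthetic frame like this reproduces the crash:

```r
library(tidyr)

# Synthetic frame at roughly the scale where we see the crash.
# Column names are illustrative, not the actual Google mobility schema.
n <- 11e6
wide <- data.frame(
  date    = as.Date("2020-01-01") + (seq_len(n) %% 1000),
  retail  = runif(n),
  grocery = runif(n),
  parks   = runif(n)
)

# A call shaped like this is what kills R for us under 4.3 once our
# real data reaches ~9 million rows (it completes fine under 4.2);
# whether this synthetic version does the same is unverified.
long <- pivot_longer(wide,
                     cols      = c(retail, grocery, parks),
                     names_to  = "data_type",
                     values_to = "value")
```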
I haven't yet dug in with a debugger, so I don't know whether this is a `tidyr` issue or something below it.
Since it seems to be about the size of the data, perhaps one could split the data into smaller chunks, pass each chunk to `tidyr::pivot_longer`, and then recombine them?
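A minimal sketch of what I mean, untested against the real data; `wide_df`, the id columns, and the pivot arguments are placeholders standing in for the actual call in `refresh_covid19mobility_google.R`:

```r
library(dplyr)
library(tidyr)

# Split the wide data frame into ~1-million-row chunks, pivot each
# chunk separately, then row-bind the long results back together.
chunk_size <- 1e6
chunk_id   <- (seq_len(nrow(wide_df)) - 1) %/% chunk_size

long_df <- split(wide_df, chunk_id) |>
  lapply(function(chunk) {
    pivot_longer(chunk,
                 cols      = -c(date, country_region),  # assumed id columns
                 names_to  = "data_type",
                 values_to = "value")
  }) |>
  bind_rows()
```

This trades one large allocation for several smaller ones, which is why it might dodge whatever size threshold is triggering the crash, though it doesn't explain the underlying 4.2 vs. 4.3 difference.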
👋🏽 @jebyrnes