Hello,
I just posted a related question on ResearchGate and the Google MaxEnt group, so sorry for the barrage of questions. I am generating ENMs for 33 species of North American bats and using the ENMevaluate function to tune the models, with maxent.jar, block folds, and the background partitioned as well.
From what I can tell, when I then filter the results and select the optimal fc and rm combination, the prediction for that model and other things like the variable importances come from a whole model: no folds, fit to all of my occurrences and background points. Is that right? I can see that metrics like AUC, CBI, etc. are averages over the folds, but the model itself is the whole model. Is the idea that the cross-validation during tuning serves as the validation metric? I'm trying to figure out what to do next after tuning, and whether to use the whole model or an average of the folds as my final model.
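In case it helps, here is roughly how I select the optimal settings and grab the model after tuning. This is just a sketch assuming the ENMeval 2.x accessors (eval.results, eval.models) and a 10-percentile omission / validation AUC selection rule; the column names are the ones I see in my results table.

```r
# ENMeval 2.x accessors; `e.mx` is the ENMevaluation object from below
res <- eval.results(e.mx)

# pick the settings with the lowest mean 10-percentile omission rate,
# breaking ties by the highest mean validation AUC
opt <- res[res$or.10p.avg == min(res$or.10p.avg), ]
opt <- opt[which.max(opt$auc.val.avg), ]

# this pulls the model fit to ALL occurrences and background
# (no folds) for those settings -- which is my question above
mod.full <- eval.models(e.mx)[[as.character(opt$tune.args)]]
```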
Anyway, this also led to an issue I noticed in my results. When ENMevaluate finishes, my output folder contains the results and details for only the last fold. I have tried turning on replicates and entering a number there, but that switches my spatial folds to randomkfold. It may just be something in my code, but is there a way, with spatial folds, to save the data and results from each fold? The code for that part is below. Thanks for taking a look!
tune.args <- list(fc = c("L", "LQ", "H", "LQH", "LQHP", "LQHPT"),
                  rm = seq(1, 4, 0.5))
oargs <- list(path = "C:/Users/makafish/Desktop/NA bats ENM final materials/Results/Final_enmeval_Results/",
              validation.bg = "partition",
              other.args = c("fadebyclamping=TRUE", "outputformat=cloglog",
                             "addsamplestobackground=FALSE"))
e.mx <- ENMevaluate(occs = coords, envs = vs_for_model, bg = bg_coords,
                    algorithm = "maxent.jar", partitions = "block",
                    tune.args = tune.args, other.settings = oargs,
                    parallel = TRUE, doClamp = TRUE, raster.preds = FALSE)
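For what it's worth, the workaround I've been considering is to refit each spatial fold manually with dismo::maxent() so that every fold writes its own maxent.jar output folder. This is only a sketch: it assumes the ENMeval 2.x fold accessors (eval.occs.grp, eval.bg.grp), and the maxent args would need to be edited to match whatever fc/rm combination comes out of tuning (the flags shown are a hypothetical LQH model with rm = 2).

```r
library(dismo)

# block fold IDs assigned by ENMevaluate to each occurrence / background point
occs.grp <- eval.occs.grp(e.mx)
bg.grp   <- eval.bg.grp(e.mx)

for (k in sort(unique(occs.grp))) {
  # training data for this replicate = all folds except k
  p <- eval.occs(e.mx)[occs.grp != k, ]
  a <- eval.bg(e.mx)[bg.grp != k, ]
  dismo::maxent(
    x = vs_for_model, p = p, a = a,
    path = file.path("fold_output", paste0("fold_", k)),  # per-fold folder (placeholder path)
    args = c("linear=true", "quadratic=true", "hinge=true",
             "product=false", "threshold=false",
             "betamultiplier=2",                            # rm for the chosen settings
             "outputformat=cloglog", "fadebyclamping=true",
             "addsamplestobackground=false")
  )
}
```

Not sure whether this exactly reproduces what ENMevaluate does internally (e.g. how the background is handled per fold), so treat it as an approximation rather than a replacement for the tuning run.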