I am trying to use loops to streamline some code so that I can analyze many sets of data within a folder without having to write new code for each set.
The goal of this code is to load each .csv file in a folder into its own dataframe, named accordingly.
#Define the path to the folder containing the data sets
folder <- "Volumes/DataHD/Folder/"
#Make a list of the files within that folder
files <- list.files(path = folder)
#Define the desired names of each dataframe
names <- c("A1", "A2", "A3")
#Set working directory to that folder
setwd(folder)
#Use a for loop to load each .csv file into its own dataframe
i = 1
for (theFile in files) {
  names[i] <- read.csv(theFile)
  i = i + 1
}
However, rather than creating three dataframes named "A1", "A2", and "A3", this code just changes the contents of the "names" vector so that each element is one of my desired dataframes.
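From searching around, I suspect assign() may be what I'm missing, since it builds a variable from a string, though I haven't verified this. Here is a tiny self-contained version of my guess (it uses a temp folder and a made-up file name, since I can't share the real data):

```r
# Toy setup: one fake .csv file in a temporary folder
folder <- tempfile("toy")
dir.create(folder)
write.csv(data.frame(v = 1:4), file.path(folder, "LongNameA1.csv"),
          row.names = FALSE)

files <- list.files(folder, full.names = TRUE)
names <- c("A1")

# assign() creates a variable whose name is the string in names[i]
for (i in seq_along(files)) {
  assign(names[i], read.csv(files[i]))
}

nrow(A1)  # 4
```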
I realize now that these attempted workarounds were foolish, but I have also tried:
i = 1
for (theFile in files) {
  toString(names[i]) <- read.csv(theFile)
  i = i + 1
}
which gives the error "could not find function "toString<-"". And:
i = 1
for (theFile in files) {
  c <- toString(names[i])
  c <- read.csv(theFile)
  i = i + 1
}
which just changes c into a dataframe. Historically, I would just do something like:
"A1" <- read.csv("Volumes/DataHD/Folder/LongNameA1.csv"
"A2" <- read.csv("Volumes/DataHD/Folder/LongNameA2.csv"
"A3" <- read.csv("Volumes/DataHD/Folder/LongNameA3.csv"
But the actual scenario involves many sets of data, and having to constantly retype or copy-paste is exactly what I am trying to avoid. Is there any way to accomplish what I'm trying to do? Or should I take a totally different approach and try to tackle it with arrays of some kind?
Edit: Each desired dataframe has a different number of rows, just in case that affects your advice.
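In case it helps, here is a minimal self-contained version of the list-based alternative I'm considering, with made-up file names and data (so the ragged row counts are visible):

```r
# Toy setup: two fake .csv files with different numbers of rows
folder <- tempfile("demo")
dir.create(folder)
write.csv(data.frame(x = 1:2), file.path(folder, "LongNameA1.csv"),
          row.names = FALSE)
write.csv(data.frame(x = 1:3), file.path(folder, "LongNameA2.csv"),
          row.names = FALSE)

files <- list.files(folder, pattern = "\\.csv$", full.names = TRUE)

# Read every file into one named list instead of separate variables
dfs <- lapply(files, read.csv)
names(dfs) <- c("A1", "A2")

# Each dataframe is then reached as dfs$A1, dfs$A2, ...
nrow(dfs$A1)  # 2
nrow(dfs$A2)  # 3
```

Since each dataframe keeps its own dimensions inside the list, differing row counts would not be a problem with this approach.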