If you need to work with multiple data files that all share the same structure, one approach is to store them in a single folder, list the files in that folder, and iterate over them, binding each one onto a combined dataset.
path <- "E:/folder1/folder2/input"
setwd(path)
First we create an empty data frame to hold the rows we will collect from the individual files. This requires having explored the files beforehand, so that their column names and types are known.
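One quick way to do that exploratory check is to read a single file and inspect its structure with str(). This is just a sketch: it assumes the folder already contains at least one CSV, and first_file is an illustrative name, not part of the original code.

```r
# Peek at the first file in the folder to learn its column names and types
first_file <- list.files(path, full.names = TRUE)[1]
str(read.csv(first_file, stringsAsFactors = FALSE))
```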
df <- data.frame(column_string = character(),
                 column_string2 = character(),
                 column_int = numeric(),
                 stringsAsFactors = FALSE)
Next we build a vector containing all the file names in the directory and iterate over it.
files <- list.files(path)
for (file in files){
  # file.path() joins the directory and file name with the correct separator;
  # the original paste0(path, "input/", file) duplicated "input" and
  # omitted the separator, producing a path that does not exist
  df2 <- read.csv(file.path(path, file))
  ##### Make some kind of modification to the data frame here, then bind
  df <- rbind(df, df2)
}
Here we assume that the datasets are .csv files and that they all share the same structure. If the files have different structures, they cannot be combined with this single loop.
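As a sketch of a more idiomatic alternative under the same assumptions, you can read every file into a list with lapply() and bind them in a single call. This avoids growing the data frame inside the loop, which copies the accumulated result on every iteration, and the pattern argument keeps any non-CSV files in the folder from being read by mistake:

```r
# List only the .csv files, with their full paths
files <- list.files(path, pattern = "\\.csv$", full.names = TRUE)

# Read each CSV into a list of data frames, then bind them once
df_list <- lapply(files, read.csv, stringsAsFactors = FALSE)
df <- do.call(rbind, df_list)
```

Because do.call(rbind, df_list) requires every data frame to have the same columns, it will also fail loudly if one of the files turns out to have a different structure.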