I need to download files from an SFTP server, parse them, and insert the contents into a database.
I am currently using RCurl as follows:
library(RCurl)
url<-c("sftp://data.ftp.net/incomining.data.txt")
x<-getURL(url, userpwd="<id>:<passwd>")
writeLines(x, incoming.data.txt"))
I also looked at download.file, and I don't see SFTP support there. Has anybody else done similar work? Since I will be getting multiple files, I have noticed that RCurl sometimes times out. I would like to download all the files from the SFTP server first and then process them. Any ideas?
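For reference, the parse-and-insert step I have in mind looks roughly like this; the delimiter, column layout, table name, and SQLite backend are just placeholders (any DBI backend would do):

library(DBI)
library(RSQLite)

# Parse the downloaded file; sep and header are assumptions about the file layout
incoming <- read.table("incoming.data.txt", header = FALSE, sep = "\t",
                       stringsAsFactors = FALSE)

# Append the parsed rows to a table ("incoming_data" is a placeholder name)
con <- dbConnect(SQLite(), "incoming.sqlite")
dbWriteTable(con, "incoming_data", incoming, append = TRUE)
dbDisconnect(con)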
It sounds like the question is "how do I avoid timeouts in RCurl?"
Increase the value of CURLOPT_CONNECTTIMEOUT. This is really just the same problem as Setting Curl's Timeout in PHP.
Edit, from comments below:
x <- getURL(url, userpwd = "<id>:<passwd>", connecttimeout = 60)  # 60 seconds, for example
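For the multiple-file case, one approach is to loop over the remote paths, raise the timeout, and write each file to disk before doing any parsing. A rough sketch, assuming the filenames are known in advance (the host, credentials, and file list below are placeholders):

library(RCurl)

files <- c("incoming1.data.txt", "incoming2.data.txt")  # placeholder file names
base  <- "sftp://data.ftp.net/"

for (f in files) {
  # Longer connect timeout; tryCatch keeps one failed transfer from stopping the loop
  x <- tryCatch(
    getURL(paste0(base, f), userpwd = "<id>:<passwd>", connecttimeout = 60),
    error = function(e) { message("failed: ", f); NULL }
  )
  if (!is.null(x)) writeLines(x, f)
}

Once everything is on disk, you can process the local copies without worrying about the connection dropping mid-parse.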