Interprocess in-memory filesystem in Python?





PyFilesystem (fs on pip) is a great library that supports in-memory filesystem creation with Python. However, I am looking to create and maintain a filesystem in Python in one process and dynamically access that filesystem in Python in another process.





Here are the bare-bones docs for the MemoryFS class, but it doesn't appear to be usable like that. It can open from a "path", but that path does not mean the same thing in two different processes; the instances appear to be (understandably) completely sandboxed.





Is this possible in PyFS? If not, is there an alternative way in Python? If not, is there a similar cross-platform solution for a ram-disk that would function in this way?
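For reference, the byte-level version of what is being asked here (unrelated processes attaching to the same block of RAM by name) has been possible in the standard library since Python 3.8 via multiprocessing.shared_memory. It hands you raw bytes, not a filesystem; layering file semantics on top is exactly the hard part. A minimal sketch, with a second attach in the same script standing in for "another process":

```python
# Stdlib sketch (Python 3.8+): share raw in-memory bytes between
# processes by name. This is NOT a filesystem -- just a named RAM block.
from multiprocessing import shared_memory

payload = b"in-memory bytes"

# "Process A": create a named RAM block and write into it.
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# "Process B" would only need the name, passed out-of-band:
other = shared_memory.SharedMemory(name=shm.name)
data = bytes(other.buf[:len(payload)])
print(data)  # b'in-memory bytes'

other.close()
shm.close()
shm.unlink()  # free the block once every process is done with it
```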





Welcome to StackOverflow. Please read and follow the posting guidelines in the help documentation, as suggested when you created this account. On topic and how to ask apply here. StackOverflow is not a design, coding, research, or tutorial service.
– Prune
Jun 29 at 22:46





Excuse me? I don't think I did anything that went against the guidelines. I already suggested a solution that did not work, and I am looking for an alternative.
– Matthew Mage
Jun 29 at 22:50




1 Answer



The original PyFilesystem had tools to do just that. You could expose a filesystem via xmlrpc for example, and connect to it via an FS object.
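That expose-over-the-wire idea can be imitated with nothing but the standard library. The sketch below is hypothetical (the class and method names are illustrative, not the old PyFilesystem API): one process holds a path-to-bytes store and serves it over XML-RPC, and another process reads and writes through a proxy, with a thread standing in for the serving process:

```python
# Hypothetical sketch of the expose-via-xmlrpc idea, pure stdlib.
# Not the actual PyFilesystem API; InMemoryStore and its methods
# are invented for illustration.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy, Binary

class InMemoryStore:
    """A trivial path -> bytes store standing in for an in-memory FS."""
    def __init__(self):
        self._files = {}

    def setcontents(self, path, data):
        self._files[path] = data.data  # unwrap the Binary payload
        return True

    def getcontents(self, path):
        return Binary(self._files[path])

    def listdir(self):
        return sorted(self._files)

# "Server" process (a daemon thread here, for brevity); port 0
# lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False,
                            allow_none=True)
port = server.server_address[1]
server.register_instance(InMemoryStore())
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Client" process
remote = ServerProxy(f"http://127.0.0.1:{port}")
remote.setcontents("hello.txt", Binary(b"hi from another process"))
print(remote.listdir())                      # ['hello.txt']
print(remote.getcontents("hello.txt").data)  # b'hi from another process'
```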



PyFilesystem2 doesn't have such functionality yet, although v2 has been designed to make implementing 'remote filesystems' much easier.



I'm not sure what your use case is, but you could store your data on an FTP server or Amazon S3, both of which are supported by PyFilesystem. Any particular reason to want an in-memory solution?



The PyFilesystem mailing list may be a better place to brainstorm about such things.





Any plans on reimplementing that functionality? We are looking to have a MemoryFS maintained by one process, where downloaded files are stored, and have it accessible from other processes. We don't want to re-download the data from a server every time the accessing process runs, so maintaining the data on FTP or S3 is, in a way, what we are already doing.
– Matthew Mage
Jun 30 at 17:11





We would also like to avoid touching disk, if possible. So unfortunately your proposed solution does not fit our problem. Thanks for the comment though!
– Matthew Mage
Jun 30 at 17:12





No immediate plans, but that's not to say it will never happen. From the sound of it, you might want to consider a caching proxy, or roll your own caching with memcached or Redis.
– Will McGugan
Jul 1 at 13:07






