Currently `dbx.filesListFolderLongpoll` only adds an entry to the `filelist` property. Instead, we need to build a queue that prioritizes entries in the order "folder", "file", "deleted", and then process each entry by updating the filelist (updating an existing entry, adding a new one or deleting an existing one) and by handling it, i.e. downloading, moving or deleting the file.

The `ServerFilelistWorker` should store the queue, add entries to it and detect moves/renames. The `MergeWorker` should consume that queue and manage a queue of its own containing actions for the download worker, the upload worker and the filesystem worker (which will perform moves, renames and copies). It has to explicitly calculate and store move and rename events, download events, copy events, delete events and upload events.

One consequence might be that we need to massively refactor the whole architecture towards a different concept. We effectively need more workers and possibly one single master class handling all workers and queues. This requires some conceptual design and is yet to be done.
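To make the prioritization more concrete, here is a minimal sketch of such a queue; the class name `ChangeQueue` and the entry shape are assumptions, not existing Nodebox code:

```js
// Hypothetical sketch: a queue that always hands out "folder" entries first,
// then "file", then "deleted", so folders exist before their contents are
// written and deletions run last.
const PRIORITY = { folder: 0, file: 1, deleted: 2 };

class ChangeQueue {
  constructor() {
    this.entries = [];
  }

  // entry: { tag: 'folder' | 'file' | 'deleted', path: '/some/path', ... }
  push(entry) {
    this.entries.push(entry);
    // keep the queue ordered by tag priority
    this.entries.sort((a, b) => PRIORITY[a.tag] - PRIORITY[b.tag]);
  }

  shift() {
    return this.entries.shift();
  }

  get length() {
    return this.entries.length;
  }
}

module.exports = ChangeQueue;
```

Processing folders first guarantees that a directory exists before files are written into it, and running deletions last avoids removing a path that a pending download or move might still need.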
- `Nodebox`: main handler class; provides normalized access to storage via `StorageInterface` and handles everything
- `StorageInterface`: basic storage interface base class
- `CloudStorageInterface`: additions for cloud storage providers
- `FilesystemStorageInterface`: additions for interacting with filesystem resources
- `StorageWorker`: one instance each; uses `FilesystemStorageInterface` or `CloudStorageInterface` to build file lists and subscribe to changes
- `StorageWatcher`: one instance each; used by `StorageWorker` to watch each storage system for changes and queue them in a normalized format
- (static) `EventResolver`: resolves file moves, copies etc. into a single event (e.g. Dropbox reports a move as "one new and one deleted file"; this class returns "one moved file"); takes a `Provider` object to correctly resolve API events from each cloud storage provider (see the sketch after this list)
- `MessageQueue`: base class for queueing storage changes for later processing; used by `StorageWatcher`
- `TransferWorker`: basic upload/download base class
- `UploadWorker`: uses `FilesystemStorageInterface` and `CloudStorageInterface` to upload files and create folders remotely
- `DownloadWorker`: uses `FilesystemStorageInterface` and `CloudStorageInterface` to download files and create folders locally
- (static) `CacheInterface`: provides easy access to the cache
- `DatabaseInterface`: provides easy access to the config file and the filelist database
- `MetadataInterface`: uses `DatabaseInterface` to provide easy access to file metadata from the database
- `FilelistInterface`: provides easy access to the local and server file lists (searching by hash, selecting by path, adding entries etc.); one instance each
- (static) `VersionHandler`: e.g. used by `TransferWorker` to resolve versions and handle conflicts
- `ErrorHandler`: used by everything to handle errors
- `LogHandler`: used by everything to handle logging
- `EventEmitter`
- `ConfigInterface`: provides easy access to the config file
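For the `EventResolver`, here is a rough sketch of the Dropbox case mentioned above (one deleted plus one new file with the same content hash becomes one moved file). The event shape is an assumption, and the hash of the deleted path would have to come from the filelist database, since Dropbox does not report a hash for deleted entries:

```js
// Hypothetical sketch: collapse Dropbox's "one deleted + one new file" pair
// into a single "moved" event by matching content hashes. Simplified: it does
// not distinguish moves from copies and assumes hashes were filled in for
// deleted paths beforehand.
function resolveEvents(rawEvents) {
  const deletedByHash = new Map();
  const resolved = [];

  for (const event of rawEvents) {
    if (event.tag === 'deleted' && event.contentHash) {
      deletedByHash.set(event.contentHash, event);
    }
  }

  for (const event of rawEvents) {
    if (event.tag === 'file' && event.contentHash && deletedByHash.has(event.contentHash)) {
      const deleted = deletedByHash.get(event.contentHash);
      deletedByHash.delete(event.contentHash);
      resolved.push({ tag: 'moved', from: deleted.path, to: event.path });
    } else if (event.tag === 'deleted' && event.contentHash) {
      // skipped here; unmatched deletions are re-added below
    } else {
      resolved.push(event);
    }
  }

  // deletions that were not part of a move stay deletions
  for (const deleted of deletedByHash.values()) {
    resolved.push(deleted);
  }

  return resolved;
}

module.exports = { resolveEvents };
```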
`CloudStorageInterface` takes a `Provider` object, like `DropboxProvider` (or `GoogleDriveProvider`), to implement a standard interface for interacting with various storage providers.
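A minimal sketch of that provider pattern; the method names (`getFilelist`, `listFolder`) are placeholders, and only `dbx.filesListFolder` is an actual Dropbox SDK call:

```js
// Hypothetical sketch of the provider pattern: CloudStorageInterface only talks
// to a normalized Provider, and DropboxProvider maps that onto the Dropbox SDK.
class CloudStorageInterface {
  constructor(provider) {
    this.provider = provider;
  }

  async getFilelist(path) {
    // entries come back in a provider-independent shape
    return this.provider.listFolder(path);
  }
}

class DropboxProvider {
  constructor(dbx) {
    this.dbx = dbx; // an authenticated Dropbox SDK client
  }

  async listFolder(path) {
    const response = await this.dbx.filesListFolder({ path });
    // depending on the SDK version the payload is the response itself or
    // response.result; normalize here
    const result = response.result || response;
    return result.entries.map((entry) => ({
      tag: entry['.tag'],          // 'file', 'folder' or 'deleted'
      path: entry.path_lower,
      contentHash: entry.content_hash,
    }));
  }
}

// A GoogleDriveProvider would implement the same listFolder() contract
// against the Drive API.
```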
Add all local files to the local mq and index them when the timestamps from `fs.stat()` and the database differ or they are not indexed yet
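A sketch of that check, assuming the filelist database stores an `mtimeMs` per path and exposes a `get(path)` lookup (both assumptions):

```js
const fs = require('fs').promises;

// Hypothetical sketch: decide whether a local file needs (re)indexing by
// comparing the filesystem modification time with the stored one.
async function needsIndexing(localPath, filelistDb) {
  const stats = await fs.stat(localPath);
  const known = await filelistDb.get(localPath); // assumed lookup by path

  // not indexed yet, or the timestamps differ -> put it on the local mq
  return !known || known.mtimeMs !== stats.mtimeMs;
}
```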
Download a file list and subscribe to updates in the meantime; add all entries from the initial file list to the server mq, run subscribed updates through the `EventResolver` and add the results to the mq
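A rough sketch of that step with the SDK calls already mentioned in this issue (`filesListFolder`, `filesListFolderContinue`, `filesListFolderLongpoll`); the queue and resolver objects are placeholders, and the unwrapping accounts for SDK versions that wrap the payload in `response.result`:

```js
// Hypothetical sketch: fetch the initial file list, then longpoll the same
// cursor and push every change onto the server message queue.
const unwrap = (response) => response.result || response;

const normalize = (entry) => ({
  tag: entry['.tag'],           // 'file', 'folder' or 'deleted'
  path: entry.path_lower,
  contentHash: entry.content_hash,
});

async function watchRemote(dbx, serverQueue, resolveEvents) {
  // initial file list, following pagination via the cursor
  let result = unwrap(await dbx.filesListFolder({ path: '', recursive: true }));
  result.entries.map(normalize).forEach((entry) => serverQueue.push(entry));

  while (result.has_more) {
    result = unwrap(await dbx.filesListFolderContinue({ cursor: result.cursor }));
    result.entries.map(normalize).forEach((entry) => serverQueue.push(entry));
  }

  let cursor = result.cursor;

  // subscription: the longpoll request blocks until something changed
  for (;;) {
    const poll = unwrap(await dbx.filesListFolderLongpoll({ cursor, timeout: 30 }));
    if (poll.changes) {
      const changes = unwrap(await dbx.filesListFolderContinue({ cursor }));
      resolveEvents(changes.entries.map(normalize)).forEach((event) => serverQueue.push(event));
      cursor = changes.cursor;
    }
  }
}
```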
Calculate actions from both lists, detecting what to download, move or create and what to upload (resolving conflicts), and create a new mq for both `DownloadWorker` and `UploadWorker` to consume
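A very reduced sketch of that merge step, assuming both file lists are `Map`s keyed by path and carrying a content hash; move detection and real conflict resolution are left out:

```js
// Hypothetical sketch: diff the local and server file lists into two action
// queues, one for the DownloadWorker and one for the UploadWorker.
// Conflicts (both sides differ) are only marked, not resolved here.
function calculateActions(localList, serverList) {
  const downloadQueue = [];
  const uploadQueue = [];

  for (const [path, remote] of serverList) {
    const local = localList.get(path);
    if (!local) {
      downloadQueue.push({ action: 'download', path });
    } else if (local.contentHash !== remote.contentHash) {
      // both exist but differ -> would go through the VersionHandler
      downloadQueue.push({ action: 'download', path, conflict: true });
    }
  }

  for (const [path] of localList) {
    if (!serverList.has(path)) {
      uploadQueue.push({ action: 'upload', path });
    }
  }

  return { downloadQueue, uploadQueue };
}
```

In the full `MergeWorker` this is also where move, rename and copy actions for the filesystem worker would be derived instead of falling back to plain downloads and uploads.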