RTC Forums
Author Topic: Limiting chunk size during large file upload  (Read 3733 times)
Max Terentiev
« on: January 17, 2017, 08:01:48 PM »

Hi Danijel,

Please point me in the right direction :)

My server needs to receive large (10-1000 MB) files from clients.

I tried the ClientUpload demo and have a question:

In this demo the client sends the file in 64 KB parts, and the server receives the parts and writes them to disk. So the server only needs about 64 KB of memory for the receiving buffer.

But what if one (or many) clients send large files and do NOT split them into 64 KB parts? In that case the server may use too much memory for receiving buffers, run out of memory and go down.

Because my server has an open JSON-RPC API (and clients will access it with any HTTP client - from PHP, Java, etc.), I must somehow limit the upload buffer size and return an error if a client sends too much data.

Is this possible using some built-in RTC features, so I can avoid reinventing the wheel? Or should I implement some upload protocol like this (rough sketch of the part-size check after the list):

JSON-RPC call BeginFileUpload to get a new UploadID,
HTTP POST /UploadFilePart?ID=UploadID, with the part size controlled in an RtcDataProvider,
JSON-RPC call EndFileUpload(UploadID), after which the server combines the received parts into one large file.
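This is just a sketch of what I have in mind for the part-size control, using the TRtcDataProvider events as they appear in the QuickStart demos. MAX_PART_SIZE and the '/UploadFilePart' URI are placeholder names of my own, not from any demo:

Code:
// uses SysUtils, rtcConn;

const
  MAX_PART_SIZE = 64 * 1024; // largest part one POST may carry

procedure TMyServer.UploadPartCheckRequest(Sender: TRtcConnection);
begin
  // Accept only our upload URI, everything else is left to other providers.
  if Sender.Request.FileName = '/UploadFilePart' then
    Sender.Accept;
end;

procedure TMyServer.UploadPartDataReceived(Sender: TRtcConnection);
begin
  if Sender.Request.ContentLength > MAX_PART_SIZE then
  begin
    // Client announced a part bigger than we allow: drop what arrived,
    // return an error and close the connection.
    Sender.ReadEx;
    Sender.Response.Status(413, 'Part too large');
    Sender.Write('Part exceeds ' + IntToStr(MAX_PART_SIZE) + ' bytes');
    Sender.Disconnect;
    Exit;
  end;
  // ... otherwise read the part with Sender.ReadEx and append it to the
  // temporary file that belongs to this UploadID ...
end;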

Thanks for the help!
D.Tkalcec (RTC)
« Reply #1 on: January 17, 2017, 09:54:52 PM »

TCP/IP, on top of which HTTP is implemented, always splits data into smaller packets when sending. TCP/IP packets are always smaller than 64 KB, with the usual MTU (maximum transmission unit) around 1500 bytes. You can google "TCP/IP" if you want to learn the details.

If you are using the TRtcDataRequest component on the Client to send data to the Server, you can keep your Client's memory requirements low by loading files in chunks. But even if you did place a 100 MB file into the Client's sending buffers with a single Write/WriteEx call, your big data chunk would still be sent out in smaller packages because of how TCP/IP works, which means that the Server would still get your file in chunks smaller than 64 KB - unless you decide NOT to call the Read or ReadEx methods in every OnDataReceived event on the Server (as recommended) and instead wait for the entire content body to arrive before reading it. In that case, your Server would obviously need a lot more RAM to buffer all the incoming content for you, giving you the comfort of using a single Read or ReadEx call to get the entire content, at the expense of increased memory usage.
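Just to illustrate the per-event Read/ReadEx pattern on the Server - this is only a rough sketch assuming the event signatures used in the QuickStart examples, with UPLOAD_FILE as a placeholder for wherever you want to store the data (a real Server would pick a target file per request):

Code:
// uses Classes, SysUtils, rtcConn, rtcInfo;

procedure TMyServer.UploadDataReceived(Sender: TRtcConnection);
var
  data: RtcByteArray;
  fs: TFileStream;
begin
  data := Sender.ReadEx; // whatever has arrived so far, usually < 64 KB
  if length(data) > 0 then
  begin
    // Append this chunk to the file on disk instead of keeping it in RAM.
    if FileExists(UPLOAD_FILE) then
      fs := TFileStream.Create(UPLOAD_FILE, fmOpenReadWrite or fmShareDenyWrite)
    else
      fs := TFileStream.Create(UPLOAD_FILE, fmCreate);
    try
      fs.Seek(0, soEnd);
      fs.WriteBuffer(data[0], length(data));
    finally
      fs.Free;
    end;
  end;
  if Sender.Request.Complete then
  begin
    // The whole request body has been received - confirm to the Client.
    Sender.Response.Status(200, 'OK');
    Sender.Write('Upload complete');
  end;
end;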

Your Server's memory requirements will also be higher if you decide to use a data format like JSON with a parser which requires the entire content to be in memory for parsing and then creates objects in memory to represent the data. But this doesn't have anything to do with memory requirements during file transfer. If you care about your Server's memory, you should implement file transfer separately from standard functions: either use raw content transfer as shown in the ClientUpload QuickStart example to accept files from custom-built Clients, or use the standard multipart form-post format as shown in the BrowserUpload QuickStart example to accept file uploads from Web Browsers and from other Clients emulating the upload method used by Web Browsers. This way, your Server will be safe to accept large file uploads without running out of memory.
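On the Client side, the idea from the ClientUpload QuickStart example is to announce the full ContentLength up front and then feed the file to the sending buffers piece by piece from the OnDataSent event. Roughly like this - only a sketch with assumed names (FFileName and CHUNK_SIZE are placeholders, and I am using a plain TFileStream here instead of the file helpers from the demo):

Code:
// uses Classes, SysUtils, rtcConn, rtcInfo;

const
  CHUNK_SIZE = 64 * 1024; // send the file in 64 KB pieces

procedure TClientForm.DataRequestBeginRequest(Sender: TRtcConnection);
var
  fs: TFileStream;
begin
  fs := TFileStream.Create(FFileName, fmOpenRead or fmShareDenyWrite);
  try
    Sender.Request.Method := 'POST';
    Sender.Request.FileName := '/upload';    // target URI on the Server
    Sender.Request.ContentLength := fs.Size; // announce the full size up front
  finally
    fs.Free;
  end;
  Sender.WriteHeader; // headers go out now, the body follows from OnDataSent
end;

procedure TClientForm.DataRequestDataSent(Sender: TRtcConnection);
var
  fs: TFileStream;
  buf: RtcByteArray;
  want: int64;
begin
  // Called each time the previous chunk has left the sending buffers,
  // so at most one CHUNK_SIZE piece of the file is in memory at a time.
  with Sender.Request do
    if ContentOut < ContentLength then
    begin
      want := ContentLength - ContentOut;
      if want > CHUNK_SIZE then want := CHUNK_SIZE;
      fs := TFileStream.Create(FFileName, fmOpenRead or fmShareDenyWrite);
      try
        fs.Position := ContentOut;   // continue where the last chunk ended
        SetLength(buf, want);
        fs.ReadBuffer(buf[0], want);
      finally
        fs.Free;
      end;
      Sender.WriteEx(buf);
    end;
end;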

Best Regards,
Danijel Tkalcec
Max Terentiev
« Reply #2 on: January 18, 2017, 12:38:07 AM »

Very clear! Thanks! Fantastic support!