RTC Forums
Author Topic: fetch on demand and smartinspect logging by session  (Read 5033 times)
lionheart
Newbie
Posts: 18


« on: January 04, 2012, 02:48:12 AM »

Happy new year..

Sorry for the delay in upgrading my subscription; I was in holiday mode. I have just done it and am back to reality.

(1) Are there any suggestions for implementing fetch-on-demand data (like TClientDataSet's PacketRecords)?
What I am testing now is using "SELECT FIRST m SKIP n" (Firebird) to return a limited number of rows. On the server, I have a TList<> in the user object that stores the number of rows fetched so far and the operation ID; the client issues the call with the OperationID again when the user needs more data (e.g. the last page of a grid). Find/Locate breaks this logic, though, and I am still thinking of a way to handle it.
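For reference, the FIRST/SKIP arithmetic works out like this (a Python sketch of the idea only; the actual code in this thread is Delphi, and the table and function names here are made up):

```python
def first_skip_query(table: str, rows_per_page: int, page_no: int) -> str:
    """Build a Firebird paging query: FIRST caps the row count,
    SKIP discards the rows belonging to earlier pages."""
    skip = rows_per_page * (page_no - 1)
    return f"SELECT FIRST {rows_per_page} SKIP {skip} * FROM {table} ORDER BY id"

# page 1 skips nothing; page 3 of 20-row pages skips the first 40 rows
print(first_skip_query("customers", 20, 3))
# SELECT FIRST 20 SKIP 40 * FROM customers ORDER BY id
```

Note that a stable ORDER BY is important with any SKIP/OFFSET scheme; without it, rows can repeat or disappear between pages even with a single user.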

(2) In SmartInspect, we can have a logging session for each client connection. It needs a string to differentiate each logging session:
Quote
clientX := Si.AddSession('Client X');
clientX.Color := clGreen;
clientX.LogMessage('blah blah..');
I have a unique string in my user object and use it as shown above. It works, but I think it would be better if logging could be per client connection or per session: the SmartInspect log session would be created in httpserver.clientconnected / httpserver.disconnected or SrvModule.opensession / SrvModule.closesession. I tried using Session.asText to store the unique log ID, but because the server is stateless the session is constantly opened and closed, so many log sessions end up being created for the same client. I don't want to make the server stateful (extend the session lifetime indefinitely), so I would well appreciate any suggestion from the gurus.
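Generically, "one log session per connection" amounts to keeping a registry keyed by a connection ID, created on connect and dropped on disconnect. A minimal Python sketch of that pattern (SmartInspect itself is a Delphi library; nothing below is its real API, just the shape of the idea):

```python
class LogSessionRegistry:
    """One log session per client connection, keyed by connection id."""

    def __init__(self):
        self._sessions = {}

    def on_connect(self, conn_id: str) -> list:
        # create the per-connection session; in SmartInspect terms this is
        # where something like Si.AddSession(conn_id) would be called
        buf = []
        self._sessions[conn_id] = buf
        return buf

    def log(self, conn_id: str, msg: str) -> None:
        self._sessions[conn_id].append(msg)

    def on_disconnect(self, conn_id: str) -> None:
        # tear the session down with the connection, not with the
        # (short-lived) stateless server session
        self._sessions.pop(conn_id, None)
```

The key point is that the registry lives as long as the TCP connection, not as long as the stateless server session, which avoids one log session per open/close cycle.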
Logged
D.Tkalcec (RTC)
Administrator
Posts: 1881


« Reply #1 on: January 04, 2012, 03:52:13 AM »

1) I don't know of a reliable solution for implementing "fetch-on-demand" on live data when working with a stateless Server. If you try doing this the way you have described it, unless you always have only one person working on your data, you will inevitably end up with duplicate or missing rows on your client side. Duplicate rows when a new row is inserted by another user between two fetch operations, or a missing row if a row is deleted by another user somewhere before the location you are about to fetch. The only reliable way to implement "fetch-on-demand" would be to use a stateful Server with a live connection to the database, but I think this would be a huge waste of resources and wouldn't recommend it to anyone.

2) I think the best solution is the one that works. And since you have said that you already have a working solution, I see no reason to change it. Especially if a different implementation would require changing your Server-side code from a stateless implementation to a stateful one.

Best Regards,
Danijel Tkalcec
Logged
Kevin Powick
RTC Expired
Posts: 87


« Reply #2 on: January 17, 2012, 11:48:26 PM »

Quote
(1) Are there any suggestions for implementing fetch-on-demand data (like TClientDataSet's PacketRecords)?
What I am testing now is using "SELECT FIRST m SKIP n" (Firebird) to return a limited number of rows.

I hope this reply isn't too late, but I've had to implement the type of fetch-on-demand that you're talking about. As Danijel mentioned in his reply, you don't want to store state on the server side, but you're going to have to store it somewhere; in this case, on the client.

If you can work with data in terms of "pages", then all you have to do is have the client tell the server which page of data it wants. A page represents a fixed number of rows of data. If you specify the number of rows that makes up a page (lines per page, LPP), you can easily calculate the OFFSET (PostgreSQL). I guess the equivalent would be SKIP in Firebird?

If I had a situation where I want my clients to fetch 20 rows of data at a time, and I wanted the data from the 5th page, I could do the following (Note: PostgreSQL syntax):

// 20 rows per page, requesting the 5th page
LPP    := 20;
PageNo := 5;

// OFFSET skips the rows of the first four pages: 20 * (5 - 1) = 80
SQL := 'SELECT * FROM MyTable LIMIT %d OFFSET %d';
SQL := Format(SQL, [LPP, LPP * (PageNo - 1)]);

I hope this helps.

Kevin
Logged

Linux is only free if your time is worthless
D.Tkalcec (RTC)
Administrator
Posts: 1881


« Reply #3 on: January 18, 2012, 02:55:02 AM »

Hi Kevin and thank you for your suggestion.

Now I think that I've misunderstood the original question, because your solution makes perfect sense. It is very similar to how web applications work with large datasets and can be used with rich clients as well, provided the Client does NOT try to combine multiple "pages" into a single continuous dataset, but (just like a web application would) shows the user a single "page" at a time.

Since there is no need to keep a separate live cursor open to the database for each Client holding the result from the SQL, the Server also doesn't need to remember the state of each client (Clients can do that by themselves).

Best Regards,
Danijel Tkalcec
Logged
Kevin Powick
RTC Expired
Posts: 87


« Reply #4 on: January 18, 2012, 03:08:19 AM »

...It is very similar to how web applications work with large datasets

Yes, this is exactly the case in which we use it.  We needed a way to page through large datasets in a web application.

Quote
and can be used with rich clients as well, provided the Client does NOT try to combine multiple "pages" into a single continuous dataset, but (just like a web application would) shows the user a single "page" at a time.

Right. In many of our own client/server scenarios, the clients are non-Delphi, without the convenience of a dataset. So, updates happen via SQL on the server, using the changed, individual row data received from the client.

--
Kevin Powick
Logged

lionheart
Newbie
Posts: 18


« Reply #5 on: January 18, 2012, 05:05:12 PM »

Kevin,

It is never too late for replies :). Yes, the equivalent of LIMIT/OFFSET in Firebird is FIRST..SKIP. Thanks for sharing your idea. The only difference (actually not much of one) is that I persist the LPP and PageNo data on the server, in a user object created in a Login remote function. Some info, like the user's first name and last name, is returned to the client; the most significant piece is the unique ID that identifies the user object on the server. Each time a client needs data, it sends an OperationID and that unique ID to the server; the server uses the unique ID to find the user object, reads the LPP/PageNo, and builds the query (or calls a stored procedure) to retrieve the data from Firebird. For Locate/Find, I just show the found row as the first row in the grid and fill in the subsequent rows (if any), up to LPP - 1 of them.
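In outline, the scheme described here (a per-user object on the server, looked up by a unique ID sent with each call) looks roughly like this. This is a Python sketch with illustrative names, not the actual RTC/Delphi code:

```python
import uuid

class UserPagingState:
    def __init__(self, lpp: int):
        self.lpp = lpp       # lines (rows) per page
        self.page_no = 1     # next page this client will be served

# server-side registry of user objects, keyed by unique id
users = {}

def login(lpp: int = 20) -> str:
    """Remote Login function: create the user object, return its unique id."""
    uid = str(uuid.uuid4())
    users[uid] = UserPagingState(lpp)
    return uid

def fetch_page_sql(uid: str, table: str) -> str:
    """Look up the caller's state and build the Firebird paging query."""
    s = users[uid]
    sql = f"SELECT FIRST {s.lpp} SKIP {s.lpp * (s.page_no - 1)} * FROM {table}"
    s.page_no += 1
    return sql
```

As Kevin points out in the next reply, this keeps per-client state on the server, which is what the later posts in the thread move to the client side.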

Danijel,
I believe I did not explain clearly what I wanted; my excuse is that English is not my mother tongue ;). As always, thanks for your help.
Logged
Kevin Powick
RTC Expired
Posts: 87


« Reply #6 on: January 18, 2012, 05:20:45 PM »

... I persist the LPP and PageNo data at the server in a user object which is created in a Login remote function

Understood, but this creates a situation where you are maintaining state on the server.  In terms of scalability, this is usually not desired.  However, scalability is often not a major issue for non-public services, so I understand why people sometimes choose to avoid some of the extra hassle that a stateless design can entail.

--
Kevin Powick
Logged

lionheart
Newbie
Posts: 18


« Reply #7 on: January 18, 2012, 06:11:09 PM »

Kevin,

On second thought, it is not much hassle to move the user-object state maintained on the server to the client. I just need to send the relevant information to the server on each client call. All my client calls already send an RtcArray, and there is no harm in adding an RtcRecord element inside it to pass that information.
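The stateless version simply moves LPP/PageNo into the request itself. A Python sketch of the payload shape (the real calls use RTC's RtcArray/RtcRecord types; the list and dict here only model the idea, and the names are made up):

```python
def build_request(operation_id: str, params: list, lpp: int, page_no: int) -> list:
    """Client side: append a paging record to the usual parameter array."""
    return params + [{"op": operation_id, "lpp": lpp, "page": page_no}]

def paging_from_request(request: list) -> tuple:
    """Server side: read FIRST/SKIP values back out; no per-user state kept."""
    rec = request[-1]
    return rec["lpp"], rec["lpp"] * (rec["page"] - 1)

req = build_request("GetOrders", ["2012-01-18"], 20, 5)
first, skip = paging_from_request(req)   # first=20, skip=80
```

Because every call carries its own paging info, the server can build the FIRST/SKIP query without remembering anything between calls.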

Thanks, I will do that now :)
Logged