FuelPHP Forums

Debugging data flow inside core request/response
  • Hi there, I have a weird request. As a bit of background: I have a small webapp that stores a bunch of details (server, control panel user/pass, CMS user/pass, emails we hold for clients, notes, etc.) for the sites we manage for our clients. It works fine, but rather than paginate properly I return all sites to the page as a table, with a text input at the top as a filtering field and a small bit of JS to do the filtering. As you'd expect with 1,000 entries, the JS is a little choppy, but nothing that'd annoy anyone.

    However, on a Linux box in the office that's not super powerful (but powerful enough for most things) the page render (FuelPHP) takes >5s, and on my MacBook Pro (2.4GHz i5, 8GiB RAM) it takes about 2.5s. That's an unacceptable load time, so I'm trying to work out what's going wrong.

    First off I enabled profiling, and the slowdown appears to be just before rendering the view itself (DB query 50-100ms, no biggie there). The built-in profiling wasn't fine-grained enough to show what was going on, as the time seemed to fall between "Request::execute End" and "End of Fuel Execution". So that was weird and useless to know. As such, I added a bunch more Profiler::mark calls in my views and in index.php, and it seems the biggest slowdown is between Request::execute (the built-in Profiler::mark) and the very first line of the view code (I'm using a template here and haven't checked whether it's before the first line of the template, but let's assume it is).

    Anyway, I looked into the request-building process and whatnot, and I can only assume that my Models are lazy loading (obviously I see this is a good thing, although not for following execution flow!). I'm using ORM with a simple
    $data['sites'] = Model_Site::find('all', array('order_by' => array('site' => 'asc')));
    
    (directly from scaffolding code) to get the data. The call itself appears to be pretty instant, so my guess is that the result is lazily loaded. The problem is, I can't quite work out where it's finally loaded. It seems to be before I personally access it (I can comment out every line in my view, thus never using the data, and it still lags), but the request building and sending process is a weird one. It seems that Request->execute()->response() only runs the controller action and lazily loads the view when __toString is called on the resulting Response object's body. (Maybe. This is where I got lost.) My question here, really, is: what is the flow?

    Also, while I know getting everything from the DB and not using pagination is a bad idea, even if I didn't code it as an HTML table (but instead as a JS object) it'd still lag. The only way it wouldn't lag would be to return a few rows at a time and use AJAX in the autocomplete/filter field, but I like the idea of the instant updates on the page, so I'd rather not go down the AJAX route. So let's ignore my possibly bad design choice. I'm 99.999% certain the issue is the DB query, but only when loading the view HTML. Like I said, I put a bunch of Profiler::mark calls even in the index.php file, and I noticed the lag right before the bit where it adds the stats (presumably on the call to $response->body() within the call that sets $response->body()), and if I comment that out, the lag suddenly happens just before $response->send() instead. I then gave up working out what happens here because, as I said above, it appears to use the magic __toString method on at least one of the classes involved, and magic methods make it very difficult to follow execution flow!
    So, can someone who knows the internals let me know whether the lag is likely to be the lazy loading of the Models from the DB call (1,000-ish entries from the DB, although not an absolute tonne of data), and if so, what I can do to mitigate it (assuming I can't paginate for whatever reason and I need all 1,000 rows)? As an addendum, I did try adding a limit of 500 to that ORM::find call and the lag pretty much halved; down at 20 it was unnoticeable, so I assume it is the sheer number of rows I'm fetching. In fact, I've just added a __construct method to my model (with a call to parent::__construct, obviously, and a Profiler::mark) and I can see that's not the source of the lag either (although it is the source of about a second of lag in a different bit of the code, between Request::execute start and Request::execute end). Is there something I'm missing here? It looks like it has to be something internal, as it's not model hydration, nor is it the big dirty loop in the view (I can blank out the view and still get the lag), but it does appear to depend on the amount of data I'm passing around.
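    For reference, here's roughly how the extra instrumentation looks (a sketch; the label strings and action name are just illustrative, the find() call is the one from above):

        // In the controller action
        public function action_index()
        {
            \Profiler::mark('before find');
            $data['sites'] = Model_Site::find('all', array('order_by' => array('site' => 'asc')));
            \Profiler::mark('after find');

            $this->template->content = View::forge('site/index', $data);
            \Profiler::mark('after View::forge');
        }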
  • Eww, sorry about the formatting here. I dunno what's happened as I put linebreaks in, promise!
  • That is a simple query (SELECT * FROM table ORDER BY field ASC) that is executed immediately. If that takes 2.5s to complete on a reasonable machine, then either your database table is huge or you're not using indexes (or both). Note that ORM has to hydrate the results and convert them into objects, so if your result is 1,000 records, an array with 1,000 objects needs to be created. On my development box creating an ORM object takes around 10ms, which means 1,000 records = 10s. There is nothing wrong with your query; there is a lot wrong with your logic. Do not query more than you need.
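    If you don't actually need ORM objects for this page, one option (a sketch using Fuel's DB query builder; the table and column names are illustrative) is to fetch only the columns the table displays, as plain arrays, which skips hydration entirely:

        // Plain query builder result: arrays, no ORM object creation.
        $data['sites'] = \DB::select('id', 'site', 'server')
            ->from('sites')
            ->order_by('site', 'asc')
            ->execute()
            ->as_array();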
  • Hi. Actually, the DB query is pretty quick (the table is all good, indexes in the right place, etc.), and I even checked the hydration - that's about a second of the lag, but that second occurs between the Request::execute start and Request::execute end events (which makes sense; that's where my controller accesses the DB). My issue is that after the controller returns, but before the view is included/required/whatever the framework does to render a view, there's a gap of 2.5 to 4 seconds that I can't account for, as it appears to happen during some magic in the framework. Here's the profiler output:
    ...
    Fuel\Core\Request::execute Start       22.098 ms
    Fuel\Core\Request::execute End         1.140 s
    Template start                         1.141 s
    About to echo $content                 1.157 s
    View start                             5.433 s
    View end                               5.433 s
    Done echoing $content                  5.433 s
    Template end                           5.433 s
    End of Fuel Execution                  5.435 s
    

    All I've done is add <?php Profiler::mark(); ?> at the top of the template, on the line just before <?php echo $content; ?>, on the line just after it, at the end of the template, and at the start and end of the view file. My view file is actually currently empty (well, one big PHP comment plus the two Profiler::mark calls), as I was making sure it wasn't the big foreach loop in there, or even the use of the data, causing the lag. Like I said, it's obviously related to the data, as a limit of 20 makes the lag disappear, but it appears to be something out of my control that's actually causing it. Memory usage according to PQP is 12MB; that's fine. Database queries shows just the one, at 40ms; that's fine. The hydration of the models occurs between Request::execute start and Request::execute end (I tested it to make sure) and accounts for the 1s lag there; fine. But I can't work out the 4.5s lag on the "echo $content" line in the template. It's obviously preparing the view. Actually, I've just had a thought: could it be the overzealous HTML escaping built into Fuel? I'd have thought that these objects, passed as an array of Models, wouldn't be touched, but maybe I'm wrong. Although even so, I'd still have thought that would happen in the original View::forge in the controller action, but what do I know?
  • Yup, solved it. Turning the auto filter off with $this->template->content = View::forge('site/index', $data, false); in the action made the 4.5s lag go away. Now there's just the 1s lag on hydration of the ORM models. Not ideal, but not a lot I can do about it since, as I said in the original post, I want to return all the rows on page load and use JS for filtering without AJAX.
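    For anyone finding this later, the relevant View calls (per FuelPHP 1.x's View API; variable names are illustrative) look like this:

        // Disable output filtering for everything passed to this view:
        $this->template->content = View::forge('site/index', $data, false);

        // Or leave filtering on by default and opt out per variable:
        $view = View::forge('site/index');
        $view->set('sites', $data['sites'], false);  // this variable is not auto-encoded
        // equivalent shorthand:
        $view->set_safe('sites', $data['sites']);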
  • You have to be a bit careful with ORM objects in combination with output filtering. Depending on the structure and the data, it might lead to a lot of conversions. So yes: either disable output filtering on ORM objects (disabling it globally is NOT a good idea), pass the results as arrays, or use a Viewmodel to pre-process the data and pass to the View only the data needed for display.
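    A sketch of the "pass as arrays" option (the field names are illustrative; this also drops fields like notes that the table never shows):

        // Reduce each ORM object to a plain array of just the displayed fields.
        $sites = array();
        foreach ($data['sites'] as $site)
        {
            $sites[] = array(
                'id'   => $site->id,
                'site' => $site->site,
            );
        }
        $view->set('sites', $sites);  // small scalar values are cheap to output-filter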
  • Hmm, seems like a bit of a minefield. I guess it certainly wouldn't hurt to extract just the required data from the models and pass that to the view (the notes field is being passed through unnecessarily, and obviously also being HTML-escaped). I must say I was surprised to see that Fuel has auto-escaping on by default for everything passed into the view. I understand the decision (help developers not mess up), but I'd prefer an explicit escape function - though I guess that's just me. Having said that, the few times PHP itself has tried to help developers out (magic_quotes, register_globals) didn't work out too well! I'll look into the Viewmodel thing. If I'm honest I'm not 100% certain how they fit in, but I'll have a reread of the docs and blog post. Cheers for your help!
  • It is a security feature, and therefore enabled by default. That means you have to think and make a conscious decision to pass data to the view unencoded (and therefore potentially insecure). It's like a firewall, which should default to 'deny all', not 'allow all'. You can use Viewmodels everywhere you can use Views. The advantage is that you can abstract pre-processing code away from your controller (where it doesn't belong) and from the view (which shouldn't contain any logic at all). Pre-processing could be creating a stdClass object with only the data needed in the view, instead of passing all the DB data even if you don't need it. Or fetching data from a supporting table and passing it as an array so the form in the view can generate a dropdown. In an adaptive application, you could add logic to select the correct view file based on the platform or screen dimensions it has to be displayed on. You don't want this code (which has nothing to do with controlling the request, nor with interacting with business logic) in your controller. Controllers should be lean and simple.
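    A minimal Viewmodel along those lines (class name, file path and field list follow FuelPHP 1.x conventions but are illustrative for this thread's site/index example):

        // fuel/app/classes/view/site/index.php
        class View_Site_Index extends ViewModel
        {
            public function view()
            {
                // Pre-processing lives here, not in the controller or the view:
                // pass only the fields the table actually displays.
                $this->sites = array();
                foreach (Model_Site::find('all', array('order_by' => array('site' => 'asc'))) as $site)
                {
                    $this->sites[] = array('id' => $site->id, 'site' => $site->site);
                }
            }
        }

        // In the controller action:
        $this->template->content = ViewModel::forge('site/index');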
