PostgresqlMetadata works very-very slowly #199
Mine took 5.8 seconds on 239 tables :D (used a Jira database as the test). Indeed, this is a problem if used to find details for a single table. But for that, general
Then the resultant queries have
Or do you want to use this for a different purpose? I would be very reluctant to add caching at the framework level for this. Some uses, such as DB migrations or admin tools in general, need the most up-to-date database info. It should be up to me, in my application logic, to dictate what is safe to assume will not change. An improved API that narrows down the query size would be better. So I will ask whether the aforementioned usage through TableGateway, which minimizes the query down to a specific table and allows you to get info about it from the TableGateway object, is enough for you. Or if you need to get metadata specifically through
Because my project does not use DB views, I solved the problem simply (custom PostgresqlMetadata):
That is a clever solution to one-off usage like this (with no TableGateway). Thank you for sharing!
PostgresqlMetadata#loadConstraintData generates this query:
It works very slowly: 43 rows, 1656 ms
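For comparison, a table-scoped constraint lookup against the native catalogs is far cheaper than the framework's all-tables query. This is an illustrative sketch only, not the framework's actual SQL; the schema name `public` and table name `issue` are assumptions:

```sql
-- Illustrative only: fetch constraints for ONE table via pg_catalog.
-- contype: 'p' = primary key, 'f' = foreign key, 'u' = unique, 'c' = check
SELECT con.conname AS constraint_name,
       con.contype AS constraint_type
FROM pg_constraint con
JOIN pg_class     rel ON rel.oid = con.conrelid
JOIN pg_namespace nsp ON nsp.oid = rel.relnamespace
WHERE nsp.nspname = 'public'
  AND rel.relname = 'issue';
```

Scoping by `conrelid` this way lets the planner discard every other table's constraints before any joining happens.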
P.S. Maybe also consider lazy loading some of the table properties, like constraints and columns. I feel that full metadata detail queried at once will still cause huge execution plans, even if we avoid the join. We could also rework the queries to be more specific to the framework's purpose. For example, ZF does not actually need a large CASE statement for the table type, because Metadata just compares the value against 'BASE TABLE' and 'VIEW',
which might as well be 'r' and 'v', respectively. Anyway, looking forward to your result, if you can.
OO OO this one is even faster!

```sql
SELECT
    c.relname AS table_name,
    c.relkind AS table_type
FROM pg_class c
WHERE c.relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public')
  AND (c.relkind = 'r' OR c.relkind = 'v')
```

[image: table names plan comparison subquery]

Without a join, there are only 38 rows to work with. Annoyingly, the fields needed from the pg_* tables are not indexed.

Wondering why performance on DDL is relevant here: never had performance-specific questions in Doctrine's schema tools, and the overhead is similar...
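To reproduce the plan comparison shown in the screenshot, either candidate query can be prefixed with EXPLAIN ANALYZE. A sketch, assuming a live PostgreSQL session and the 'public' schema:

```sql
-- Show the actual execution plan and timing for the subquery variant.
EXPLAIN ANALYZE
SELECT c.relname, c.relkind
FROM pg_class c
WHERE c.relnamespace = (SELECT oid FROM pg_namespace WHERE nspname = 'public')
  AND (c.relkind = 'r' OR c.relkind = 'v');
```

Running the same prefix against the join-based framework query makes the row-count difference (438 vs. 38 in the numbers quoted above) visible directly in the plan output.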
That's why I dismissed it at first too. DDL is either a one-time thing or a database admin utility (if someone is writing a custom one for whatever reason), and neither use case needs to be that performant. I've seen metadata used as part of a page load to check whether the database version is "safe" against what the application expects, but that too was mostly for convenience during dev and is trivial to do manually. What got me worried was other features that use metadata without users realizing, inadvertently running extra queries.
Sorry, never mind about the auto_increment thing. I mixed it up with the sequence feature that does that. I do not THINK TableGateway does that on its own, in which case the purpose of automatically populating its column fields seems less important, but it still gives the option to do so for whatever reason.
While I agree that a more thorough analysis of query performance should be done, the query you provided worked perfectly for me. I opened #321 so others could benefit from the performance improvement.
This repository has been closed and moved to laminas/laminas-db; a new issue has been opened at laminas/laminas-db#86. |
In the class Zend\Db\Metadata\Source\PostgresqlMetadata, the method loadTableNameData works very slowly.
It generates this query:
Query result: 438 rows, 2160 ms!
Why select every table in the database when we need the data for only a single table?
Perhaps you need to cache?
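The narrowing being asked for here can be sketched as a single-table lookup. This is illustrative only, not the framework's actual query; the schema name `public` and table name `issue` are assumptions:

```sql
-- Illustrative only: resolve the type of ONE table instead of all 438 rows.
SELECT t.table_name,
       t.table_type
FROM information_schema.tables t
WHERE t.table_schema = 'public'
  AND t.table_name   = 'issue';
```

With both filters in the WHERE clause, the result is at most one row, so the cost no longer grows with the number of tables in the database.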