performance - How to find bottleneck on PostgreSQL join
I have a Postgres DB (9.3.3) with a table of 60k restaurant records, each having an address. In order to see every restaurant together with its address, I join like this:

select name, city from accommodations inner join addresses on addresses.accommodation_id = accommodations.id

I thought this would be the easiest and fastest join possible, but it doesn't stop running. What's wrong here, and what should I search for? Thanks in advance.
\d

                 List of relations
  Schema  |        Name         |   Type
----------+---------------------+----------
 public   | cities              | table
 public   | cities_id_seq       | sequence
 public   | geography_columns   | view
 public   | geometry_columns    | view
 public   | locations_id_seq    | sequence
 public   | raster_columns      | view
 public   | raster_overviews    | view
 public   | schema_migrations   | table
 public   | spatial_ref_sys     | table
 topology | layer               | table
 topology | topology            | table
 topology | topology_id_seq     | sequence
(17 rows)

Table sizes:

"public.accommodations";"8232 kB"
"public.addresses";"19 MB"

EXPLAIN (ANALYZE, BUFFERS) output:

Merge Left Join  (cost=0.75..6748.33 rows=68647 width=674) (actual time=0.022..102.891 rows=66249 loops=1)
  Merge Cond: (accommodations.id = addresses.accommodation_id)
  Buffers: shared hit=9167
  ->  Index Scan using accommodations_pkey on accommodations  (cost=0.29..2377.71 rows=68647 width=560) (actual time=0.010..17.053 rows=66249 loops=1)
        Buffers: shared hit=1681
  ->  Index Scan using idx_addresses_accommodation_id on addresses  (cost=0.29..3370.89 rows=66250 width=114) (actual time=0.008..19.648 rows=66250 loops=1)
        Buffers: shared hit=7486
Total runtime: 108.642 ms
If it literally won't stop running, you might want to check whether there's a hung session holding a lock on the table.
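One way to look for such a session is to query the standard `pg_stat_activity` system view (in 9.3 the relevant columns are `pid`, `state`, and `query`). A minimal sketch, assuming any DB-API 2.0 driver such as psycopg2; the helper name `find_hung_sessions` is illustrative, not part of any library:

```python
# Query all backend sessions; sessions stuck "idle in transaction"
# hold their locks and can block other queries on the same table.
SESSIONS_SQL = """
    SELECT pid, state, query
    FROM pg_stat_activity
"""

def find_hung_sessions(rows):
    """Given (pid, state, query) tuples fetched from pg_stat_activity,
    return the pids of sessions stuck 'idle in transaction'."""
    return [pid for pid, state, query in rows
            if state == "idle in transaction"]

# With a live connection you would run something like:
#   cur.execute(SESSIONS_SQL)
#   print(find_hung_sessions(cur.fetchall()))
```

A pid reported here can then be inspected, or terminated with `pg_terminate_backend(pid)` if you are sure it is stuck.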
We had this issue when using Python's psycopg2. Once a query execution failed, every following query on the table (no matter the source of the query) never returned a result. Restarting the postgres service solved it, and adding a rollback on query failure in the Python code prevented it from happening again.
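That rollback-on-failure fix can be sketched like this (an illustrative helper, not the poster's actual code; it works with any DB-API 2.0 connection, e.g. one returned by `psycopg2.connect`):

```python
def execute_with_rollback(conn, sql, params=None):
    """Run a statement; on failure roll the transaction back so the
    session is not left in an aborted/idle-in-transaction state,
    then re-raise the error for the caller to handle."""
    cur = conn.cursor()
    try:
        cur.execute(sql, params or ())
        conn.commit()
        return cur
    except Exception:
        # Without this rollback, every later query on this session
        # would fail until the connection is reset.
        conn.rollback()
        raise
```

In psycopg2 the same effect can also be had by using the connection as a context manager, which commits on success and rolls back on exception.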
Tags: sql, performance, postgresql