Merge main into doc-8.y for 8.0.1 release
rbrw committed Jun 14, 2023
2 parents 7d1b415 + f31bef3 commit 3dabe63
Showing 16 changed files with 2,061 additions and 260 deletions.
4 changes: 4 additions & 0 deletions documentation/configure.markdown
@@ -240,6 +240,10 @@ the `query-timeout-default` description for additional information.
At the moment, this limit only applies to the `/pdb//query/..`
endpoints.

Note that this maximum does not apply to PuppetDB sync (PE only) queries,
i.e. those with `origin=puppet:puppetdb-sync-*`; they specify their own
timeouts based on the sync `entity-time-limit`.
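
For reference, a minimal sketch of how these settings might look in the
`[puppetdb]` section of PuppetDB's configuration ini (this assumes the
maximum described here is named `query-timeout-max`; the values are
illustrative, in seconds):

```ini
[puppetdb]
# Applied when a query specifies no timeout of its own (illustrative value).
query-timeout-default = 600
# Ceiling that regular queries may not exceed (name assumed: query-timeout-max).
query-timeout-max = 1200
```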

### `certificate-allowlist`

Optional. This describes the path to a file that contains a list of
2 changes: 1 addition & 1 deletion documentation/puppetdb.ditamap
@@ -5,7 +5,7 @@
<prodinfo>
<prodname>puppetdb</prodname>
<vrmlist>
<vrm version="7"/>
<vrm version="8"/>
</vrmlist>
</prodinfo>
</topicmeta>
13 changes: 8 additions & 5 deletions documentation/upgrade.markdown
@@ -26,15 +26,18 @@ of date.

If you are not planning to change your underlying PuppetDB database
configuration prior to upgrading, you don't need to worry about migrating your
existing data: PuppetDB will handle this automatically. However, if you plan to
switch to a different database, you should export your existing data prior to
changing your database configuration, but you must use PuppetDB 3.x to do so.
Please consult the [Migrating Data][puppetdb3] for more information.
existing data: PuppetDB will handle this automatically. Likewise, upgrading
PostgreSQL between minor versions requires no changes. To upgrade your
PostgreSQL database from one major version to another, consult the
[PostgreSQL upgrade
docs](https://www.postgresql.org/docs/current/upgrading.html) for your
options.
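
If you do need a major-version jump, the dump-and-restore option described in
those docs might look roughly like the sketch below; the paths, versions, and
service names are illustrative, and `pg_upgrade` is the other common route:

```sh
# Stop PuppetDB so the database is quiescent, then dump the old cluster.
sudo systemctl stop puppetdb
sudo -u postgres pg_dumpall -f /var/tmp/puppetdb-pg-backup.sql

# Install the new PostgreSQL major version, initialize a fresh cluster,
# and restore the dump into it.
sudo -u postgres psql -d postgres -f /var/tmp/puppetdb-pg-backup.sql

sudo systemctl start puppetdb
```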

## Upgrading with the PuppetDB module

If you [installed PuppetDB with the module][module], you only need to do the
following to upgrade:
following to upgrade between major versions of PuppetDB. Note that the module
does not automate major-version upgrades of the PostgreSQL database.

1. Make sure that the Puppet Server has an updated version of the
[puppetlabs-puppetdb](https://forge.puppetlabs.com/puppetlabs/puppetdb)
145 changes: 145 additions & 0 deletions ext/bin/analyze-index-usage
@@ -0,0 +1,145 @@
#!/usr/bin/env ruby
#
# Intended to do some basic analysis of the index usage in a given
# Puppet Enterprise Support Script.

def usage
  <<~USAGETXT
    Usage:
      analyze-index-usage DATABASE_NAME SUPPORT_SCRIPT_DIR
  USAGETXT
end

def helptext
  printf usage
  exit 0
end

def misuse
  STDERR.printf usage
  exit 2
end

helptext if ARGV.any? { |arg| arg == "-h" || arg == "--help" }

misuse unless ARGV.length == 2

db_name = ARGV[0]
support_script = ARGV[1]

# Check for existence of the db stats file
db_stats_file = File.join(support_script, "enterprise/postgres_db_stats.txt")
unless File.exist? db_stats_file
  STDERR.puts "File does not exist #{db_stats_file}"
  exit 2
end

def move_enum_to_db_table(enum, db_name, table_name)
  # Find the database section
  loop do
    break if /^#{db_name}$/ =~ enum.next
  end

  # Find the table inside the database section
  loop do
    break if /^#{table_name}/ =~ enum.next
  end
end

def parse_table_columns(enum)
  # Parse the table column header
  names = enum.next.split('|').map(&:strip)

  # Skip the separator line beneath the header
  enum.next

  names
end

def parse_table_rows(enum, column_names)
  rows = []
  loop do
    line = enum.next

    # Tables end with "(N rows)"
    break if /^\(/ =~ line

    values = line.split('|').map(&:strip)

    row = {}
    values.each_index do |i|
      row[column_names[i]] = values[i]
    end
    rows << row
  end

  rows
end

# Parse the information about table writes
enum = File.new(db_stats_file).each
move_enum_to_db_table(enum, db_name, 'pg_stat_user_tables')
column_names = parse_table_columns(enum)

# Parse the rows, one for each table
tables = parse_table_rows(enum, column_names)

# Convert the table array to a hash so we can look up each table by name when
# we print the information for each index below.
table_hash = tables.each_with_object({}) do |v, acc|
  acc[v["relname"]] = v
end

# It isn't certain that the table ordering is stable, so get a new enumerator
# before looking for the index usage statistics.
enum = File.new(db_stats_file).each
move_enum_to_db_table(enum, db_name, 'pg_stat_user_indexes')
column_names = parse_table_columns(enum)

# Parse the rows, one for each index
indexes = parse_table_rows(enum, column_names)

fmt = '%-30s | %-60s | %-12s | %-12s | %-13s | %-13s | %-12s |'
puts fmt % ['tablename', 'index_name', 'idx_scan', 'idx_tup_read', 'idx_tup_fetch', 'table_updates', 'total_tup']
puts '-' * 172

# Ruby doesn't promise a stable sort, but its sort appears to be stable. This
# does a series of sorts to produce a table that's hopefully readable (where
# the least used indexes are at the bottom). The order is fairly arbitrary.
indexes.sort_by do |v|
  v["relname"]
end.sort_by do |v|
  v["idx_tup_fetch"].to_i
end.sort_by do |v|
  v["idx_tup_read"].to_i
end.sort_by do |v|
  v["idx_scan"].to_i
end.reverse.each do |v|
  table = v['relname']
  ts = table_hash[table]

  # updates intends to quantify the rough write load for each index. Whenever
  # a row is written to, the index will need to be updated, so this sums up
  # all the writes to the table for each index. Hot updates do not create a
  # dead row and may not update the primary key (or its index), but they will
  # need to update some other set of indexes.
  #
  # Not every update will update every index, so this is just an approximation.
  #
  # Q: does delete actually write to the index?
  updates = ts['n_tup_ins'].to_i + ts['n_tup_upd'].to_i + ts['n_tup_del'].to_i + ts['n_tup_hot_upd'].to_i

  # For each index, the total_tup number times the size of the datatype(s)
  # being indexed should be roughly the size of the index (plus some
  # inevitable overhead). This may also vary for indexes that can reduce size
  # by compression/deduplication.
  #
  # This number is only an estimate: dead tuples are still represented in
  # indexes, so they are included in the total, and even tuples that have been
  # reclaimed by vacuum may still be represented in an index if it didn't have
  # many tuples to clean up.
  total_tup = ts['n_live_tup'].to_i + ts['n_dead_tup'].to_i

  puts fmt % [table, v['indexrelname'], v['idx_scan'], v['idx_tup_read'], v['idx_tup_fetch'], updates, total_tup]
end
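
The script would be invoked along the lines of `ext/bin/analyze-index-usage
pe-puppetdb /path/to/support-script` (database name illustrative). Since the
chained sorts above lean on an unguaranteed sort stability, the same ordering
can also be produced in one pass with a composite key; a sketch (the final
table-name tie-breaker here is ascending rather than reversed, but the
least-used-indexes-last intent is unchanged):

```ruby
# Descending on the usage counters; ascending table name breaks ties.
sorted = indexes.sort_by do |v|
  [-v["idx_scan"].to_i, -v["idx_tup_read"].to_i, -v["idx_tup_fetch"].to_i, v["relname"].to_s]
end
```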
40 changes: 39 additions & 1 deletion locales/puppetdb.pot
@@ -7,7 +7,7 @@
msgid ""
msgstr ""
"Project-Id-Version: puppetlabs.puppetdb \n"
"X-Git-Ref: 6026857825dc4945d53f3021167adb87a7c0a8d8\n"
"X-Git-Ref: 0c4f316ecc084e1babf24b2b9bf0d0f86ce426f1\n"
"Report-Msgid-Bugs-To: [email protected]\n"
"POT-Creation-Date: \n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
@@ -131,10 +131,20 @@ msgstr ""
msgid "Press ctrl-c to stop"
msgstr ""

#: src/puppetlabs/puppetdb/cli/generate.clj
msgid ""
"Warning: the weight of the baseline factset adjusted to {0} facts is already "
"{1} bytes which is greater than the requested total size of {2} bytes."
msgstr ""

#: src/puppetlabs/puppetdb/cli/generate.clj
msgid "Error: output path does not exist: {0}"
msgstr ""

#: src/puppetlabs/puppetdb/cli/generate.clj
msgid "Error: the sum of -i and -l must be less than or equal to 100%"
msgstr ""

#: src/puppetlabs/puppetdb/cli/services.clj
msgid "Auto-expired node {0}"
msgstr ""
@@ -531,6 +541,10 @@ msgstr ""
msgid "gc-interval cannot be negative: {0}"
msgstr ""

#: src/puppetlabs/puppetdb/config.clj
msgid "Configured {0} timeout must be non-negative number, not {1}"
msgstr ""

#: src/puppetlabs/puppetdb/config.clj
msgid "Required setting ''vardir'' is not specified."
msgstr ""
@@ -804,6 +818,10 @@ msgstr ""
msgid "more than one item returned for singular query"
msgstr ""

#: src/puppetlabs/puppetdb/http/query.clj
msgid "Query timeout must be non-negative number, not {0}"
msgstr ""

#: src/puppetlabs/puppetdb/http/query.clj
#: src/puppetlabs/puppetdb/middleware.clj
msgid "Missing required query parameter ''{0}''"
@@ -831,6 +849,14 @@ msgstr ""
msgid "PuppetDB queries must be made via GET/POST"
msgstr ""

#: src/puppetlabs/puppetdb/http/query.clj
msgid "Query {0} from {1} exceeded timeout"
msgstr ""

#: src/puppetlabs/puppetdb/http/query.clj
msgid "Query {0} exceeded timeout"
msgstr ""

#: src/puppetlabs/puppetdb/http/query.clj
msgid ""
"query parameters ''distinct_start_time'' and ''distinct_end_time'' must be "
@@ -1165,6 +1191,14 @@ msgid ""
"''{1}''"
msgstr ""

#: src/puppetlabs/puppetdb/query_eng.clj
msgid "PDBQuery:{0}: from {1} exceeded timeout"
msgstr ""

#: src/puppetlabs/puppetdb/query_eng.clj
msgid "PDBQuery:{0}: exceeded timeout"
msgstr ""

#: src/puppetlabs/puppetdb/query_eng.clj
msgid "Invalid entity ''{0}'' in query"
msgstr ""
@@ -1183,6 +1217,10 @@ msgstr ""
msgid "Unable to stream response: {0}"
msgstr ""

#: src/puppetlabs/puppetdb/query_eng.clj
msgid "Impossible situation: query streamer exiting without delivery"
msgstr ""

#: src/puppetlabs/puppetdb/query_eng.clj
msgid "Query streaming failed: {0} {1}"
msgstr ""
4 changes: 2 additions & 2 deletions project.clj
@@ -1,6 +1,6 @@
(def pdb-version "8.0.1-SNAPSHOT")
(def pdb-version "8.0.1")

(def clj-parent-version "5.3.5")
(def clj-parent-version "6.0.0")

(defn true-in-env? [x]
(#{"true" "yes" "1"} (System/getenv x)))
