ImageMetaTag.ImageDict
This submodule contains the ImageMetaTag.ImageDict class, and functions for preparing instances of it.

The purpose of an ImageMetaTag.ImageDict is to sort the image metadata, supplied to ImageMetaTag.savefig() and usually stored in a database file, into a useful form that can quickly and easily be presented as a webpage by ImageMetaTag.webpage.write_full_page().

A simple example of creating a webpage using an ImageDict is shown in simplest_image_dict.py
(C) Crown copyright Met Office. All rights reserved.
The ImageDict Class

class ImageMetaTag.ImageDict(input_dict, level_names=None, selector_widths=None, selector_animated=None, animation_direction=None)
A class which holds a hierarchical dictionary of dictionaries, and the associated methods for appending/removing dictionaries from it.
The expected use case for the dictionary is to represent a large set of images. To create an input_dict of the required format from a flat dictionary of metadata items, use ImageMetaTag.dict_heirachy_from_list().
append(new_dict, devmode=False, skip_key_relist=False)
Appends a new dictionary (with a single element in each layer!) into the current ImageDict. The skip_key_relist option can be set to True to stop the regeneration of key lists.
copy_except_dict_and_keys()
Returns a copy of an ImageDict, except it will have null values for the dict and keys.
dict_depth(uniform_depth=False)
Uses dict_depths to get the depth of all branches of the plot_dict and, if required, checks that they all equal the max.
dict_depths(in_dict, depth=0)
Recursively finds the depth of an ImageDict and returns a list of lists.
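A branch-depth scan like dict_depths can be sketched in a few lines of plain Python (an illustrative reimplementation, not the library's code; the nested dict below is a made-up example):

```python
def dict_depths(in_dict, depth=0):
    '''Return the depth of every branch of a dict of dicts, as a flat list.'''
    if not isinstance(in_dict, dict) or not in_dict:
        # a non-dict (or empty dict) terminates a branch:
        return [depth]
    depths = []
    for val in in_dict.values():
        depths.extend(dict_depths(val, depth + 1))
    return depths

nested = {'rain': {'2024-01-01': {'T+0': 'img1.png'}},
          'snow': {'2024-01-01': 'img2.png'}}
branch_depths = dict_depths(nested)  # [3, 2]: one entry per branch
```

With this, dict_depth's uniform_depth check reduces to asserting that every entry equals max(branch_depths).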
dict_index_array(devmode=False, maxdepth=None, verbose=False)
Using the list of dictionary keys (at each level of a uniform_depth dictionary of dictionaries), this produces a list of the indices that can be used to reference the keys to get the result for each element.
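For a uniform-depth dictionary, the index array is essentially the Cartesian product of the per-level key indices. A sketch under that assumption (the key lists are invented for illustration; the real method also handles branching via return_key_inds):

```python
from itertools import product

def index_array(keys_by_level):
    '''Return every combination of per-level key indices, one tuple per leaf.'''
    return list(product(*[range(len(keys)) for keys in keys_by_level]))

keys = [['rain', 'snow'], ['T+0', 'T+6']]
inds = index_array(keys)  # [(0, 0), (0, 1), (1, 0), (1, 1)]
# each index tuple selects one key per level:
leaf_keys = [tuple(keys[lvl][i] for lvl, i in enumerate(ind)) for ind in inds]
```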
dict_print(in_dict, indent=0, outstr='')
Recursively details a dictionary of dictionaries, with indentation, to a string.
dict_prune(in_dict, dicts_pruned=False)
Prunes the ImageDict of empty, unterminated branches (which occur after parts have been removed). Returns True if a dict was pruned, False if not.
dict_remove(in_dict, rm_dict)
Removes a dictionary of dictionaries from another, larger one. This can leave empty branches at multiple levels of the dict, so it needs cleaning up afterwards.
dict_union(in_dict, new_dict)
Produces the union of a dictionary of dictionaries.
key_at_depth(in_dict, depth)
Returns the keys of a dictionary, at a given depth.
keys_by_depth(in_dict, depth=0, keys=None, subdirs=None)
Returns:
- a dictionary of sets, containing the keys at each level of the dictionary (keyed by the level number).
list_keys_by_depth(devmode=False)
Lists the keys of the dictionary to create a list of keys, for each level of the dictionary, up to its depth.
It is usually much faster to create an ImageDict by appending images to it with skip_key_relist=True, and then regenerate the key lists in a single pass at the end.
mergedicts(dict1, dict2)
Alternative version of dict_union, using generators, which is much faster for large dicts but needs to be converted to a dict when it is called: new_dict = dict(mergedicts(dict1, dict2))
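One way such a generator-based union can be written (a sketch only; the library's exact clash-resolution rules may differ):

```python
def mergedicts(dict1, dict2):
    '''Yield (key, value) pairs of the union, merging nested dicts recursively.'''
    for key in set(dict1) | set(dict2):
        if key in dict1 and key in dict2:
            if isinstance(dict1[key], dict) and isinstance(dict2[key], dict):
                yield (key, dict(mergedicts(dict1[key], dict2[key])))
            else:
                # on a clash of non-dict values, this sketch prefers dict2:
                yield (key, dict2[key])
        elif key in dict1:
            yield (key, dict1[key])
        else:
            yield (key, dict2[key])

merged = dict(mergedicts({'a': {'x': 1}, 'b': 2}, {'a': {'y': 3}, 'c': 4}))
```

Being a generator, it does no work until consumed, which is why the result must be wrapped in dict() at the call site.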
remove(rm_dict, skip_key_relist=False)
Removes a dictionary from within an ImageDict. The skip_key_relist option can be set to True to stop the regeneration of key lists.
TIP: Because the remove process needs to prune empty sections afterwards, it can be slow; when removing many entries, set skip_key_relist=True and regenerate the key lists once at the end.
return_from_list(vals_at_depth)
Returns the end values of an ImageDict, when given a list of values for the keys at different depths. Returns None if the set of values is not contained in the ImageDict.
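The lookup is a straightforward walk down the nested levels; a minimal sketch with an invented two-level dict:

```python
def return_from_list(plot_dict, vals_at_depth):
    '''Walk down a nested dict, one key per level; None if any key is missing.'''
    sub = plot_dict
    for val in vals_at_depth:
        if not isinstance(sub, dict) or val not in sub:
            return None
        sub = sub[val]
    return sub

nested = {'rain': {'T+0': 'rain_t0.png', 'T+6': 'rain_t6.png'}}
found = return_from_list(nested, ['rain', 'T+6'])    # 'rain_t6.png'
missing = return_from_list(nested, ['snow', 'T+0'])  # None
```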
return_key_inds(in_dict, out_array=None, this_set_of_inds=None, depth=None, level=None, verbose=False, devmode=False)
Does the work for dict_index_array by recursively adding indices of the keys to a current list, branching where required, and adding completed lists to the out_array.
sort_keys(sort_methods, devmode=False)
Sorts the keys of a plot dictionary, according to a particular sort method (or a list of sort methods that matches the number of keys).
Functions useful in preparing ImageDicts

ImageMetaTag.readmeta_from_image(img_file, img_format=None)
Reads the metadata added by the ImageMetaTag savefig from an image file, and returns a dictionary of tagname: value pairs.
ImageMetaTag.dict_heirachy_from_list(in_dict, payload, heirachy)
Converts a flat dictionary of tagname: value pairs into an ordered dictionary of dictionaries, according to the input heirachy (which is a list of tagnames).
The output dictionary will only have one element per level, but can be used to create, or append into, an ImageMetaTag.ImageDict. The final level will be the 'payload' input, which is the object that the dictionary, with all its levels, is describing. The payload would usually be the full/relative path of the image file, or a list of image files.
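In spirit, the conversion just wraps the payload in one dict per tagname, innermost first. A plain-dict sketch (the real function returns an ordered dictionary; the tag names below are invented):

```python
def dict_heirachy_from_list(in_dict, payload, heirachy):
    '''Build {value1: {value2: ... payload}} following the tagname order.'''
    out = payload
    for tagname in reversed(heirachy):
        out = {in_dict[tagname]: out}
    return out

img_info = {'variable': 'rain', 'forecast time': 'T+6'}
d = dict_heirachy_from_list(img_info, 'plots/rain_t6.png',
                            ['variable', 'forecast time'])
# d == {'rain': {'T+6': 'plots/rain_t6.png'}}
```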
ImageMetaTag.dict_split(in_dict, n_split=None, size_split=None, extra_opts=None)
Generator that breaks up a flat dictionary and yields a set of sub-dictionaries, either in n_split chunks or of size_split in size. It is split on its first level, not recursively.
It is very useful for splitting large dictionaries of image metadata so that they can be processed in parallel.
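The splitting itself reduces to slicing the first-level items; a simplified sketch (extra_opts omitted; the dictionary contents are illustrative):

```python
def dict_split(in_dict, n_split=None, size_split=None):
    '''Yield sub-dictionaries, split on the first level only.'''
    items = list(in_dict.items())
    if size_split is None:
        # ceiling division, so every item lands in one of n_split chunks:
        size_split = -(-len(items) // n_split)
    for start in range(0, len(items), size_split):
        yield dict(items[start:start + size_split])

flat = {'img{}.png'.format(i): {'n': i} for i in range(10)}
chunks = list(dict_split(flat, n_split=3))  # chunk sizes 4, 4 and 2
```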
ImageMetaTag.simple_dict_filter(simple_dict, tests, raise_key_mismatch=False)
Tests the contents of a simple, un-hierarchical dict (the properties of an image) against a set of tests.
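As a simplified, single-boolean illustration of this kind of metadata filtering (the library's real test grammar and return values are richer than this sketch; the tag names are invented):

```python
def simple_dict_filter(simple_dict, tests):
    '''True if the metadata passes every test; None accepts any value for a tag.'''
    for tagname, allowed in tests.items():
        if allowed is None:
            continue
        if simple_dict.get(tagname) not in allowed:
            return False
    return True

img_info = {'variable': 'rain', 'forecast time': 'T+6'}
keep = simple_dict_filter(img_info, {'variable': ['rain', 'snow'],
                                     'forecast time': None})        # True
drop = simple_dict_filter(img_info, {'variable': ['temperature']})  # False
```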
Functions useful in preparing ImageDicts
- -
+ImageMetaTag.check_for_required_keys(img_info, req_keys)¶ImageMetaTag.
check_for_required_keys
(img_info, req_keys)[source]¶Checks an img_info dictionary has a set of required keys, specifed as a list of strings
Returns True or False accordingly.
ImageMetaTag 0.7.3 documentation
Source code for ImageMetaTag.db
'''
This module contains a set of functions to create/write to/read
and maintain an sqlite3 database of image files and their associated metadata.
'''
import os, sqlite3, fnmatch, time, errno, pdb

from ImageMetaTag import META_IMG_FORMATS, DEFAULT_DB_TIMEOUT, DEFAULT_DB_ATTEMPTS
from ImageMetaTag.img_dict import readmeta_from_image, check_for_required_keys

from datetime import datetime
import numpy as np
from io import StringIO

# the name of the database table that holds the sqlite database of plot metadata
SQLITE_IMG_INFO_TABLE = 'img_info'
def info_key_to_db_name(in_str):
    'Consistently convert a name in the img_info dict to something to be used in the database'
    return in_str.replace(' ', '__')

def db_name_to_info_key(in_str):
    'Inverse of info_key_to_db_name'
    # convert to string, to remove unicode string
    return str(in_str).replace('__', ' ')
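These two helpers form a reversible mapping only while tag names contain no double underscores; a quick round-trip check (standalone copy, purely for illustration):

```python
def info_key_to_db_name(in_str):
    # spaces are not usable in column names, so swap them for '__'
    return in_str.replace(' ', '__')

def db_name_to_info_key(in_str):
    return str(in_str).replace('__', ' ')

db_name = info_key_to_db_name('forecast time')  # 'forecast__time'
round_trip = db_name_to_info_key(db_name)       # 'forecast time'
# caveat: a tag name already containing '__' would not survive the round trip
```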
def write_img_to_dbfile(db_file, img_filename, img_info, add_strict=False,
                        attempt_replace=False,
                        timeout=DEFAULT_DB_TIMEOUT):
    '''
    Writes image metadata to a database.
    '''
    if len(img_info) == 0:
        raise ValueError('Size of image info dict is zero')
    if db_file is None:
        pass
    else:
        # open the database:
        dbcn, dbcr = open_or_create_db_file(db_file, img_info, timeout=timeout)
        # now write:
        write_img_to_open_db(dbcr, img_filename, img_info,
                             add_strict=add_strict, attempt_replace=attempt_replace)
        # now commit that database entry and close:
        dbcn.commit()
        dbcn.close()
def read(db_file, required_tags=None, tag_strings=None,
         db_timeout=DEFAULT_DB_TIMEOUT,
         db_attempts=DEFAULT_DB_ATTEMPTS):
    '''
    Reads image metadata from a database file.

    In older versions, this was named read_img_info_from_dbfile, which will still work.
    '''
    if db_file is None:
        return None, None
    else:
        if not os.path.isfile(db_file):
            return None, None
        else:
            n_tries = 1
            read_db = False
            while not read_db and n_tries <= db_attempts:
                try:
                    # open the connection and the cursor:
                    dbcn, dbcr = open_db_file(db_file, timeout=db_timeout)
                    # read it:
                    f_list, out_dict = read_img_info_from_dbcursor(dbcr,
                                                                   required_tags=required_tags,
                                                                   tag_strings=tag_strings)
                    # close connection:
                    dbcn.close()
                    read_db = True
                except sqlite3.OperationalError as op_err:
                    if 'database is locked' in op_err.message:
                        # database being locked is what the retries and timeouts are for:
                        print('%s database timeout reading from file "%s", %s s' \
                              % (dt_now_str(), db_file, n_tries * db_timeout))
                        n_tries += 1
                    elif op_err.message == 'no such table: {}'.format(SQLITE_IMG_INFO_TABLE):
                        # the db file exists, but it doesn't have anything in it:
                        return None, None
                    else:
                        # everything else needs to be reported and raised immediately:
                        msg = '{} for file {}'.format(op_err.message, db_file)
                        raise sqlite3.OperationalError(msg)
            # if we went through all the attempts then it is time to raise the error:
            if n_tries > db_attempts:
                msg = '{} for file {}'.format(op_err.message, db_file)
                raise sqlite3.OperationalError(msg)
            # close connection:
            dbcn.close()
            return f_list, out_dict

read_img_info_from_dbfile = read
def merge_db_files(main_db_file, add_db_file, delete_add_db=False,
                   delete_added_entries=False, attempt_replace=False,
                   db_timeout=DEFAULT_DB_TIMEOUT, db_attempts=DEFAULT_DB_ATTEMPTS):
    '''
    Merges two ImageMetaTag database files, with the contents of add_db_file added
    to the main_db_file. (The delete_added_entries option does nothing if
    delete_add_db is True.)
    '''
    # read what we want to add in:
    add_filelist, add_tags = read(add_db_file, db_timeout=db_timeout, db_attempts=db_attempts)
    if add_filelist is not None:
        if len(add_filelist) > 0:
            n_tries = 1
            wrote_db = False
            while not wrote_db and n_tries <= db_attempts:
                try:
                    # open the main database
                    dbcn, dbcr = open_db_file(main_db_file, timeout=db_timeout)
                    # and add in the new contents:
                    for add_file, add_info in add_tags.items():
                        write_img_to_open_db(dbcr, add_file, add_info,
                                             attempt_replace=attempt_replace)
                    dbcn.commit()
                    # if we got here, then we're good!
                    wrote_db = True
                    # finally close:
                    dbcn.close()
                except sqlite3.OperationalError as op_err:
                    if 'database is locked' in op_err.message:
                        # database being locked is what the retries and timeouts are for:
                        print('%s database timeout writing to file "%s", %s s' \
                              % (dt_now_str(), main_db_file, n_tries * db_timeout))
                        n_tries += 1
                    else:
                        # everything else needs to be reported and raised immediately:
                        msg = '{} for file {}'.format(op_err.message, main_db_file)
                        raise sqlite3.OperationalError(msg)
            # if we went through all the attempts then it is time to raise the error:
            if n_tries > db_attempts:
                msg = '{} for file {}'.format(op_err.message, main_db_file)
                raise sqlite3.OperationalError(msg)
    # delete or tidy:
    if delete_add_db:
        rmfile(add_db_file)
    elif delete_added_entries:
        del_plots_from_dbfile(add_db_file, add_filelist, do_vacuum=False,
                              allow_retries=True, skip_warning=True)
def open_or_create_db_file(db_file, img_info, restart_db=False, timeout=DEFAULT_DB_TIMEOUT):
    '''
    Opens a database file and sets up initial tables, then returns the connection and cursor.
    '''
    if not os.path.isfile(db_file) or restart_db:
        if os.path.isfile(db_file):
            os.remove(db_file)
        # create a new database file:
        dbcn = sqlite3.connect(db_file)
        dbcr = dbcn.cursor()
        # and create the table:
        create_table_for_img_info(dbcr, img_info)
    else:
        # open the database file:
        dbcn, dbcr = open_db_file(db_file, timeout=timeout)
        # check for the required table:
        table_names = list_tables(dbcr)
        if SQLITE_IMG_INFO_TABLE not in table_names:
            # create it if required:
            create_table_for_img_info(dbcr, img_info)
    return dbcn, dbcr
def create_table_for_img_info(dbcr, img_info):
    'Creates a database table, in a database cursor, to store the input img_info'
    create_command = 'CREATE TABLE {}(fname TEXT PRIMARY KEY,'.format(SQLITE_IMG_INFO_TABLE)
    for key in list(img_info.keys()):
        create_command += ' "{}" TEXT,'.format(info_key_to_db_name(key))
    create_command = create_command[0:-1] + ')'
    # Can make a rare race condition if multiple processes try to create the file at the same time;
    # If that happens, the error is:
    # sqlite3.OperationalError: table img_info already exists for file .......
    try:
        dbcr.execute(create_command)
    except sqlite3.OperationalError as op_err:
        if 'table {} already exists'.format(SQLITE_IMG_INFO_TABLE) in op_err.message:
            # another process has just created the table, so sleep(1)
            # This is only when a db file is created so isn't called often
            # (and the race condition is rare!)
            time.sleep(1)
        else:
            # everything else needs to be reported and raised immediately:
            raise sqlite3.OperationalError(op_err.message)
    except sqlite3.Error as sq_err:
        # everything else needs to be reported and raised immediately:
        raise sqlite3.Error(sq_err.message)
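The dynamic CREATE TABLE pattern above (one TEXT column per metadata tag, with the filename as primary key) can be exercised standalone against an in-memory database; the tag names here are invented:

```python
import sqlite3

img_info = {'variable': 'rain', 'forecast time': 'T+6'}
# one quoted TEXT column per tag, spaces mapped to '__' as in info_key_to_db_name:
cols = ', '.join('"{}" TEXT'.format(k.replace(' ', '__')) for k in img_info)
create_command = 'CREATE TABLE img_info(fname TEXT PRIMARY KEY, {})'.format(cols)

dbcn = sqlite3.connect(':memory:')
dbcr = dbcn.cursor()
dbcr.execute(create_command)
col_names = [row[1] for row in dbcr.execute('PRAGMA table_info(img_info)')]
# col_names == ['fname', 'variable', 'forecast__time']
dbcn.close()
```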
def open_db_file(db_file, timeout=DEFAULT_DB_TIMEOUT):
    'Opens an existing database file and returns the connection and cursor'
    dbcn = sqlite3.connect(db_file, timeout=timeout)
    dbcr = dbcn.cursor()
    return dbcn, dbcr
def read_db_file_to_mem(db_file, timeout=DEFAULT_DB_TIMEOUT):
    '''
    Opens a pre-existing database file into a copy held in memory. This can be accessed much
    faster.

    Returns an open database connection (dbcn) and cursor (dbcr)
    '''
    # read the database into an in-memory file object:
    #dbcn = sqlite3.connect(db_file)
    dbcn, _ = open_db_file(db_file, timeout=timeout)
    memfile = StringIO()
    for line in dbcn.iterdump():
        memfile.write(u'{}\n'.format(line))
    dbcn.close()
    memfile.seek(0)
    # Create a database in memory and import from memfile
    dbcn = sqlite3.connect(":memory:")
    dbcn.cursor().executescript(memfile.read())
    dbcn.commit()
    dbcr = dbcn.cursor()
    return dbcn, dbcr
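The iterdump-into-:memory: pattern used by read_db_file_to_mem can be demonstrated end to end (the source database here is also in-memory, purely for the demo; in real use it is a file on disk):

```python
import sqlite3
from io import StringIO

# a small source database standing in for an on-disk file:
src = sqlite3.connect(':memory:')
src.execute('CREATE TABLE img_info(fname TEXT PRIMARY KEY, variable TEXT)')
src.execute("INSERT INTO img_info VALUES('a.png', 'rain')")
src.commit()

# dump its SQL into a memory file object:
memfile = StringIO()
for line in src.iterdump():
    memfile.write('{}\n'.format(line))
src.close()
memfile.seek(0)

# replay the dump into a fresh in-memory database:
dbcn = sqlite3.connect(':memory:')
dbcn.cursor().executescript(memfile.read())
rows = dbcn.execute('SELECT * FROM img_info').fetchall()  # [('a.png', 'rain')]
dbcn.close()
```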
def write_img_to_open_db(dbcr, filename, img_info, add_strict=False, attempt_replace=False):
    '''
    Does the work for write_img_to_dbfile to add an image to the open database cursor (dbcr).

    If attempt_replace is True, it will replace the database
    entry if the image is already present. Otherwise it will ignore it.
    '''
    # now add in the information:
    # get the name of the fields from the cursor description:
    _ = dbcr.execute('select * from %s' % SQLITE_IMG_INFO_TABLE).fetchone()
    field_names = [r[0] for r in dbcr.description]
    # convert these to keys:
    field_names = [db_name_to_info_key(x) for x in field_names]
    # now build the command
    add_command = 'INSERT INTO {}(fname,'.format(SQLITE_IMG_INFO_TABLE)
    add_list = [filename]
    for key, item in img_info.items():
        if key in field_names:
            add_command += ' "{}",'.format(info_key_to_db_name(key))
            add_list.append(item)
        elif add_strict:
            raise ValueError('Attempting to add a line to the database that includes invalid fields')
    # add in the right number of ?
    add_command = add_command[0:-1] + ') VALUES(' + '?,'*(len(add_list)-1) + '?)'
    try:
        dbcr.execute(add_command, add_list)
    except sqlite3.IntegrityError:
        if attempt_replace:
            # try an INSERT OR REPLACE
            add_repl_command = add_command.replace('INSERT ', 'INSERT OR REPLACE ')
            # if this fails, we want it to report its error message as is, so no 'try':
            dbcr.execute(add_repl_command, add_list)
        else:
            # this file is already in the database (as the primary key), so do nothing...
            pass
    finally:
        pass
def list_tables(dbcr):
    'lists the tables present, from a database cursor'
    result = dbcr.execute("SELECT name FROM sqlite_master WHERE type='table';").fetchall()
    table_names = sorted([x[0] for x in zip(*result)])
    return table_names
def read_img_info_from_dbcursor(dbcr, required_tags=None, tag_strings=None):
    '''
    Reads from an open database cursor (dbcr) for :func:`ImageMetaTag.db.read` and other routines.

    Options:
    * required_tags - a list of image tags to return, and to fail if not all are present
    * tag_strings - an input list that will be populated with the unique values of the image tags
    '''
    # read in the data from the database:
    db_contents = dbcr.execute('select * from %s' % SQLITE_IMG_INFO_TABLE).fetchall()
    # and convert that to a useful dict/list combo:
    filename_list, out_dict = process_select_star_from(db_contents, dbcr,
                                                       required_tags=required_tags,
                                                       tag_strings=tag_strings)
    return filename_list, out_dict
def process_select_star_from(db_contents, dbcr, required_tags=None, tag_strings=None):
    '''
    Converts the output from a select * from .... command into a standard output format
    Requires a database cursor (dbcr) to identify the field names.

    Returns:
    * a list of filenames
    * a dictionary, by filename, containing a dictionary of the image metadata \
      as tagname: value
    '''
    out_dict = {}
    filename_list = []
    # get the name of the fields from the cursor description:
    field_names = [r[0] for r in dbcr.description]

    # the required_tags input is a list of tag names (as strings):
    if required_tags is not None:
        if not isinstance(required_tags, list):
            raise ValueError('Input required_tags should be a list of strings')
        else:
            for test_str in required_tags:
                if not isinstance(test_str, str):
                    raise ValueError('Input required_tags should be a list of strings')

    if tag_strings is not None:
        if not isinstance(tag_strings, list):
            raise ValueError('Input tag_strings should be a list')

    # now iterate and make a dictionary to return,
    # with the tests outside the loops so they're not tested for every row and element:
    if required_tags is None and tag_strings is None:
        for row in db_contents:
            fname = str(row[0])
            filename_list.append(fname)
            img_info = {}
            for tag_name, tag_val in zip(field_names[1:], row[1:]):
                img_info[db_name_to_info_key(tag_name)] = str(tag_val)
            out_dict[fname] = img_info
        # return None, None if the contents are empty:
        if len(filename_list) == 0 and len(out_dict) == 0:
            return None, None
    elif required_tags is not None and tag_strings is None:
        for row in db_contents:
            fname = str(row[0])
            filename_list.append(fname)
            img_info = {}
            for tag_name, tag_val in zip(field_names[1:], row[1:]):
                tag_name_full = db_name_to_info_key(tag_name)
                if tag_name_full in required_tags:
                    img_info[tag_name_full] = str(tag_val)
            if len(img_info) != len(required_tags):
                raise ValueError('Database entry does not contain all of the required_tags')
            out_dict[fname] = img_info
        # return None, None if the contents are empty:
        if len(filename_list) == 0 and len(out_dict) == 0:
            return None, None
    elif required_tags is None and tag_strings is not None:
        # we want all tags, but we want them as references to a common list:
        for row in db_contents:
            fname = str(row[0])
            filename_list.append(fname)
            img_info = {}
            for tag_name, tag_val in zip(field_names[1:], row[1:]):
                str_tag_val = str(tag_val)
                try:
                    # locate the tag_string in the list:
                    tag_index = tag_strings.index(str_tag_val)
                    # and reference it:
                    img_info[db_name_to_info_key(tag_name)] = tag_strings[tag_index]
                except ValueError:
                    # tag not yet in the tag_strings list, so
                    # add the new string onto the end:
                    tag_strings.append(str_tag_val)
                    # and reference it:
                    img_info[db_name_to_info_key(tag_name)] = tag_strings[-1]
            out_dict[fname] = img_info
        # return None, None if the contents are empty:
        if len(filename_list) == 0 and len(out_dict) == 0:
            return None, None
    else:
        # we want to filter the tags, and we want them as references to a common list:
        for row in db_contents:
            fname = str(row[0])
            filename_list.append(fname)
            img_info = {}
            for tag_name, tag_val in zip(field_names[1:], row[1:]):
                # test to see if the tag name is required:
                tag_name_full = db_name_to_info_key(tag_name)
                if tag_name_full in required_tags:
                    str_tag_val = str(tag_val)
                    try:
                        # locate the tag_string in the list:
                        tag_index = tag_strings.index(str_tag_val)
                        # and reference it:
                        img_info[tag_name_full] = tag_strings[tag_index]
                    except ValueError:
                        # tag not yet in the tag_strings list, so
                        # add the new string onto the end:
                        tag_strings.append(str_tag_val)
                        # and reference it:
                        img_info[tag_name_full] = tag_strings[-1]
            out_dict[fname] = img_info
        # return None, None if the contents are empty:
        if len(filename_list) == 0 and len(out_dict) == 0:
            return None, None

    # we're good, return the data:
    return filename_list, out_dict
def del_plots_from_dbfile(db_file, filenames, do_vacuum=True, allow_retries=True,
                          db_timeout=DEFAULT_DB_TIMEOUT, db_attempts=DEFAULT_DB_ATTEMPTS,
                          skip_warning=False):
    '''
    deletes a list of files from a database file created by :mod:`ImageMetaTag.db`
    '''
    if not isinstance(filenames, list):
        fn_list = [filenames]
    else:
        fn_list = filenames

    # delete command to use:
    del_cmd = "DELETE FROM {} WHERE fname=?"
    if db_file is None:
        pass
    else:
        if not os.path.isfile(db_file) or len(fn_list) == 0:
            pass
        else:
            if allow_retries:
                # split the list of filenames up into appropriately sized chunks, so that
                # concurrent delete commands each have a chance to complete:
                # 200 is arbitrarily chosen, but seems to work
                chunk_size = 200
                chunks = __gen_chunk_of_list(fn_list, chunk_size)
                for chunk_o_filenames in chunks:
                    # within each chunk of files, need to open the db, with time out retries etc:
                    n_tries = 1
                    wrote_db = False
                    while not wrote_db and n_tries <= db_attempts:
                        try:
                            # open the database
                            dbcn, dbcr = open_db_file(db_file, timeout=db_timeout)
                            # go through the file chunk, one by one, and delete:
                            for fname in chunk_o_filenames:
                                try:
                                    dbcr.execute(del_cmd.format(SQLITE_IMG_INFO_TABLE), (fname,))
                                except sqlite3.OperationalError as op_err_file:
                                    err_check = 'no such table: {}'.format(SQLITE_IMG_INFO_TABLE)
                                    if op_err_file.message == err_check:
                                        # the db file exists, but it doesn't have anything in it:
                                        if not skip_warning:
                                            msg = ('WARNING: Unable to delete file entry "{}" from'
                                                   ' database "{}" as database table is missing')
                                            print(msg.format(fname, db_file))
                                        return
                                    else:
                                        if not skip_warning:
                                            # if this fails, print a warning...
                                            # need to figure out why this happens
                                            msg = ('WARNING: unable to delete file entry:'
                                                   ' "{}", type "{}" from database')
                                            print(msg.format(fname, type(fname)))
                            dbcn.commit()
                            # if we got here, then we're good!
                            wrote_db = True
                            # finally close (for this chunk)
                            dbcn.close()
                        except sqlite3.OperationalError as op_err:
                            if 'database is locked' in op_err.message:
                                # database being locked is what the retries and timeouts are for:
                                print('%s database timeout deleting from file "%s", %s s' \
                                      % (dt_now_str(), db_file, n_tries * db_timeout))
                                n_tries += 1
                            elif 'disk I/O error' in op_err.message:
                                msg = '{} for file {}'.format(op_err.message, db_file)
                                raise IOError(msg)
                            else:
                                # everything else needs to be reported and raised immediately:
                                msg = '{} for file {}'.format(op_err.message, db_file)
                                raise sqlite3.OperationalError(msg)
+ msg = '{} for file {}'.format(op_err.message, db_file)
raise ValueError(msg)
- # if we went through all the attempts then it is time to raise the error:
+ # if we went through all the attempts then it is time to raise the error:
if n_tries > db_attempts:
- msg = '{} for file {}'.format(op_err.message, db_file)
+ msg = '{} for file {}'.format(op_err.message, db_file)
raise sqlite3.OperationalError(msg)
else:
- # just open the database:
+ # just open the database:
dbcn, dbcr = open_db_file(db_file)
- # delete the contents:
+ # delete the contents:
for i_fn, fname in enumerate(fn_list):
try:
dbcr.execute(del_cmd.format(SQLITE_IMG_INFO_TABLE), (fname,))
except:
if not skip_warning:
- # if this fails, print a warning...
- # need to figure out why this happens
- msg = ('WARNING: unable to delete file entry:'
- ' "{}", type "{}" from database')
- print(msg.format(fname, type(fname)))
- # commit every 100 to give other processes a chance:
+ # if this fails, print a warning...
+ # need to figure out why this happens
+ msg = ('WARNING: unable to delete file entry:'
+ ' "{}", type "{}" from database')
+ print(msg.format(fname, type(fname)))
+ # commit every 100 to give other processes a chance:
if i_fn % 100 == 0:
dbcn.commit()
time.sleep(1)
- # commit, and vacuum if required:
+ # commit, and vacuum if required:
dbcn.commit()
if do_vacuum:
if allow_retries:
- # need to re-open the db, if we allowed retries:
+ # need to re-open the db, if we allowed retries:
dbcn, dbcr = open_db_file(db_file)
- dbcn.execute("VACUUM")
+ dbcn.execute("VACUUM")
dbcn.close()
elif not allow_retries:
- dbcn.close()
-
+ dbcn.close()
+
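The delete loop above wraps every database open in a retry-on-lock loop. A minimal standalone sketch of that pattern is below; `execute_with_retries` is a hypothetical helper (the real code opens the database via `open_db_file()` and retries whole chunks of deletes, not single statements):

```python
import sqlite3
import time

def execute_with_retries(db_file, sql, params=(), attempts=3, timeout=1):
    """Minimal sketch of the retry-on-lock pattern used above."""
    for n_tries in range(1, attempts + 1):
        try:
            dbcn = sqlite3.connect(db_file, timeout=timeout)
            dbcn.execute(sql, params)
            dbcn.commit()
            dbcn.close()
            return True
        except sqlite3.OperationalError as op_err:
            # a locked database is what the retries and timeouts are for;
            # anything else is raised immediately:
            if 'database is locked' in str(op_err) and n_tries < attempts:
                time.sleep(timeout)
            else:
                raise
    return False
```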
def __gen_chunk_of_list(in_list, chunk_size):
- 'generator that yields successive chunks of a list, each of length chunk_size'
+ 'generator that yields successive chunks of a list, each of length chunk_size'
for ndx in range(0, len(in_list), chunk_size):
yield in_list[ndx:min(ndx + chunk_size, len(in_list))]
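The chunking behaviour can be seen in a short, self-contained re-statement of the private generator above (the final chunk is shorter when the list length is not a multiple of the chunk size):

```python
def gen_chunk_of_list(in_list, chunk_size):
    """Yield successive chunks of in_list, each at most chunk_size long."""
    for ndx in range(0, len(in_list), chunk_size):
        yield in_list[ndx:min(ndx + chunk_size, len(in_list))]

# five filenames in chunks of two leaves a final short chunk:
chunks = list(gen_chunk_of_list(['a', 'b', 'c', 'd', 'e'], 2))
# chunks == [['a', 'b'], ['c', 'd'], ['e']]
```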
@@ -679,19 +681,19 @@ Source code for ImageMetaTag.db
Returns the output, processed by :func:`ImageMetaTag.db.process_select_star_from`
'''
- if db_file is None:
- sel_results = None
+ if db_file is None:
+ sel_results = None
else:
if not os.path.isfile(db_file):
- sel_results = None
+ sel_results = None
else:
- # just open the database:
+ # just open the database:
dbcn, dbcr = open_db_file(db_file)
- # do the select:
+ # do the select:
sel_results = select_dbcr_by_tags(dbcr, select_tags)
dbcn.close()
- return sel_results
-
+ return sel_results
+
[docs]def select_dbcr_by_tags(dbcr, select_tags):
'''
Selects from an open database cursor (dbcr) the entries that match a dict of field
@@ -700,43 +702,43 @@ Source code for ImageMetaTag.db
Returns the output, processed by :func:`ImageMetaTag.db.process_select_star_from`
'''
if len(select_tags) == 0:
- # just read and return the whole thing:
+ # just read and return the whole thing:
return read_img_info_from_dbcursor(dbcr)
else:
- # convert these to lists:
+ # convert these to lists:
tag_names = list(select_tags.keys())
tag_values = [select_tags[x] for x in tag_names]
- # Right... this is where I need to understand how to do a select!
- #select_command = 'SELECT * FROM %s WHERE symbol=?' % SQLITE_IMG_INFO_TABLE
- select_command = 'SELECT * FROM %s WHERE ' % SQLITE_IMG_INFO_TABLE
+ # Right... this is where I need to understand how to do a select!
+ #select_command = 'SELECT * FROM %s WHERE symbol=?' % SQLITE_IMG_INFO_TABLE
+ select_command = 'SELECT * FROM %s WHERE ' % SQLITE_IMG_INFO_TABLE
n_tags = len(tag_names)
use_tag_values = []
for i_tag, tag_name, tag_val in zip(list(range(n_tags)), tag_names, tag_values):
if isinstance(tag_val, (list, tuple)):
- # if a list or tuple, then use IN:
- select_command += '%s IN (' % info_key_to_db_name(tag_name)
- select_command += ', '.join(['?']*len(tag_val))
- select_command += ')'
+ # if a list or tuple, then use IN:
+ select_command += '%s IN (' % info_key_to_db_name(tag_name)
+ select_command += ', '.join(['?']*len(tag_val))
+ select_command += ')'
if i_tag+1 < n_tags:
- select_command += ' AND '
+ select_command += ' AND '
use_tag_values.extend(tag_val)
else:
- # do an exact match:
+ # do an exact match:
if i_tag+1 < n_tags:
- select_command += '%s = ? AND ' % info_key_to_db_name(tag_name)
+ select_command += '%s = ? AND ' % info_key_to_db_name(tag_name)
else:
- select_command += '%s = ?' % info_key_to_db_name(tag_name)
+ select_command += '%s = ?' % info_key_to_db_name(tag_name)
use_tag_values.append(tag_val)
db_contents = dbcr.execute(select_command, use_tag_values).fetchall()
- # and convert that to a useful dict/list combo:
+ # and convert that to a useful dict/list combo:
filename_list, out_dict = process_select_star_from(db_contents, dbcr)
- return filename_list, out_dict
-
-[docs]def scan_dir_for_db(basedir, db_file, img_tag_req=None, subdir_excl_list=None,
- known_file_tags=None, verbose=False, no_file_ext=False,
- return_timings=False, restart_db=False):
+ return filename_list, out_dict
+
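The SELECT construction in `select_dbcr_by_tags` can be sketched as a standalone function. This is an illustrative sketch only: it assumes the tag names are already valid column names, whereas the real code maps them through `info_key_to_db_name()` first.

```python
def build_select(table, select_tags):
    """Build a parameterised SELECT matching the logic above."""
    clauses, values = [], []
    for tag_name, tag_val in select_tags.items():
        if isinstance(tag_val, (list, tuple)):
            # a list or tuple of values becomes an IN (...) clause:
            placeholders = ', '.join(['?'] * len(tag_val))
            clauses.append('{} IN ({})'.format(tag_name, placeholders))
            values.extend(tag_val)
        else:
            # a scalar value is an exact match:
            clauses.append('{} = ?'.format(tag_name))
            values.append(tag_val)
    command = 'SELECT * FROM {} WHERE {}'.format(table, ' AND '.join(clauses))
    return command, values

cmd, vals = build_select('img_info',
                         {'variable': ['t2m', 'precip'], 'region': 'global'})
# cmd == 'SELECT * FROM img_info WHERE variable IN (?, ?) AND region = ?'
```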
+[docs]def scan_dir_for_db(basedir, db_file, img_tag_req=None, subdir_excl_list=None,
+ known_file_tags=None, verbose=False, no_file_ext=False,
+ return_timings=False, restart_db=False):
'''
A useful utility that scans a directory on disk for images that can go into a database.
This should only be used to build a database from a directory of tagged images that
@@ -767,10 +769,10 @@ Source code for ImageMetaTag.db
'''
if os.path.isfile(db_file) and not restart_db:
- raise ValueError('''scan_dir_for_db will not work on a pre-existing file unless restart_db
-is True, in which case the database file will be restarted as empty. Use with care.''')
+ raise ValueError('''scan_dir_for_db will not work on a pre-existing file unless restart_db
+is True, in which case the database file will be restarted as empty. Use with care.''')
- if known_file_tags is not None:
+ if known_file_tags is not None:
known_files = list(known_file_tags.keys())
else:
known_files = []
@@ -778,56 +780,56 @@ Source code for ImageMetaTag.db
if return_timings:
prev_time = datetime.now()
add_interval = 1
- # total number of entries added
+ # total number of entries added
n_added = 0
- # number of entries added since last timer
+ # number of entries added since last timer
n_add_this_timer = 0
- # and this is the list to return:
+ # and this is the list to return:
n_adds = []
timings_per_add = []
os.chdir(basedir)
- first_img = True
- for root, dirs, files in os.walk('./', followlinks=True, topdown=True):
- if not subdir_excl_list is None:
+ first_img = True
+ for root, dirs, files in os.walk('./', followlinks=True, topdown=True):
+ if not subdir_excl_list is None:
dirs[:] = [d for d in dirs if not d in subdir_excl_list]
for meta_img_format in META_IMG_FORMATS:
- for filename in fnmatch.filter(files, '*%s' % meta_img_format):
- # append to the list, taking off the preceding './' and the file extension:
- if root == './':
+ for filename in fnmatch.filter(files, '*%s' % meta_img_format):
+ # append to the list, taking off the preceding './' and the file extension:
+ if root == './':
img_path = filename
else:
- img_path = '%s/%s' % (root[2:], filename)
+ img_path = '%s/%s' % (root[2:], filename)
if no_file_ext:
img_name = os.path.splitext(img_path)[0]
else:
img_name = img_path
- # read the metadata:
+ # read the metadata:
if img_name in known_files:
- # if we know this file details, then get it:
+ # if we know this file details, then get it:
known_files.remove(img_name)
img_info = known_file_tags.pop(img_name)
- read_ok = True
+ read_ok = True
else:
- # otherwise read from disk:
+ # otherwise read from disk:
(read_ok, img_info) = readmeta_from_image(img_path)
if read_ok:
if img_tag_req:
- # check to see if an image is needed:
+ # check to see if an image is needed:
use_img = check_for_required_keys(img_info, img_tag_req)
else:
- use_img = True
+ use_img = True
if use_img:
if first_img:
db_cn, db_cr = open_or_create_db_file(db_file, img_info,
- restart_db=True)
- first_img = False
+ restart_db=True)
+ first_img = False
write_img_to_open_db(db_cr, img_name, img_info)
if verbose:
- print(img_name)
+ print(img_name)
if return_timings:
n_added += 1
@@ -836,23 +838,23 @@ Source code for ImageMetaTag.db
time_interval_s = (datetime.now()- prev_time).total_seconds()
timings_per_add.append(time_interval_s / add_interval)
n_adds.append(n_added)
- # increase the add_interval so we don't swamp
- # the processing with timings!
+ # increase the add_interval so we don't swamp
+ # the processing with timings!
add_interval = np.ceil(np.sqrt(n_added))
n_add_this_timer = 0
if verbose:
- print('len(n_adds)=%s, currently every %s' \
+ print('len(n_adds)=%s, currently every %s' \
% (len(n_adds), add_interval))
- # commit and close, and we are done:
+ # commit and close, and we are done:
if not first_img:
db_cn.commit()
db_cn.close()
if return_timings:
- return n_adds, timings_per_add
-
+ return n_adds, timings_per_add
+
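The timing loop in `scan_dir_for_db` records a sample every `add_interval` additions and then grows the interval as `ceil(sqrt(n_added))`, so the bookkeeping cost shrinks relative to the scan as the database grows. A sketch of how the interval evolves:

```python
import math

def next_add_interval(n_added):
    """Adaptive sampling interval, as used by the timing code above."""
    return int(math.ceil(math.sqrt(n_added)))

# the interval grows with the number of entries already added:
intervals = [next_add_interval(n) for n in (1, 4, 100, 10000)]
# intervals == [1, 2, 10, 100]
```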
def rmfile(path):
"""
os.remove, but does not complain if the file has already been
@@ -866,8 +868,8 @@ Source code for ImageMetaTag.db
else: raise
def dt_now_str():
- 'returns datetime.now(), as a string, in a common format'
- return datetime.now().strftime('%Y-%m-%d %H:%M:%S')
+ 'returns datetime.now(), as a string, in a common format'
+ return datetime.now().strftime('%Y-%m-%d %H:%M:%S')
@@ -875,7 +877,7 @@ Source code for ImageMetaTag.db
-
+
Navigation
-
@@ -884,13 +886,14 @@
Navigation
-
modules |
- - ImageMetaTag 0.7.3 documentation »
- - Module code »
+ - ImageMetaTag 0.7.3 documentation »
+ - Module code »
+ - ImageMetaTag »
-
- © Copyright 2015-2018, British Crown Copyright.
- Created using Sphinx 1.4.8.
+
+ © Copyright 2015-2018, British Crown Copyright.
+ Created using Sphinx 1.2.2.
\ No newline at end of file
diff --git a/docs/build/html/_modules/ImageMetaTag/webpage.html b/docs/build/html/_modules/ImageMetaTag/webpage.html
index 02cb8aa..aaa9d89 100644
--- a/docs/build/html/_modules/ImageMetaTag/webpage.html
+++ b/docs/build/html/_modules/ImageMetaTag/webpage.html
@@ -6,7 +6,7 @@
- ImageMetaTag.webpage — ImageMetaTag 0.7.3 documentation
+ ImageMetaTag.webpage — ImageMetaTag 0.7.3 documentation
@@ -23,13 +23,11 @@
-
-
-
+
-
-
+
+
Navigation
-
@@ -38,23 +36,27 @@
Navigation
-
modules |
- - ImageMetaTag 0.7.3 documentation »
- - Module code »
+ - ImageMetaTag 0.7.3 documentation »
+ - Module code »
+ - ImageMetaTag »
-
+
-
+
Quick search
+
+ Enter search terms or a module, class or function name.
+
@@ -63,10 +65,10 @@ Quick search
-
+
Source code for ImageMetaTag.webpage
-'''
+'''
This sub-module contains functions to write out an :class:`ImageMetaTag.ImageDict` to a webpage.
The webpages are made up of a single .html file, which is the page to be loaded to view the images.
@@ -108,30 +110,30 @@ Source code for ImageMetaTag.webpage
'''
import os, json, pdb, shutil, tempfile, copy, zlib
-import numpy as np
-import ImageMetaTag as imt
+import numpy as np
+import ImageMetaTag as imt
-from multiprocessing import Pool
+from multiprocessing import Pool
-# single indent to be used on the output webpage
-INDENT = ' '
+# single indent to be used on the output webpage
+INDENT = ' '
LEN_INDENT = len(INDENT)
-# for compressed json files, we use pako to inflate the data back to full size:
-PAKO_JS_FILE = 'pako_inflate.js'
-PAKO_RELEASE = '1.0.5'
-PAKO_SOURE_TAR = 'https://github.com/nodeca/pako/archive/{}.tar.gz'.format(PAKO_RELEASE)
+# for compressed json files, we use pako to inflate the data back to full size:
+PAKO_JS_FILE = 'pako_inflate.js'
+PAKO_RELEASE = '1.0.5'
+PAKO_SOURE_TAR = 'https://github.com/nodeca/pako/archive/{}.tar.gz'.format(PAKO_RELEASE)
-[docs]def write_full_page(img_dict, filepath, title, page_filename=None, tab_s_name=None,
- preamble=None, postamble=None, postamble_no_imt_link=False,
- compression=False,
- initial_selectors=None, show_selector_names=False,
- show_singleton_selectors=True, optgroups=None,
- url_type='int', only_show_rel_url=False, verbose=False,
- style='horiz dropdowns', write_intmed_tmpfile=False,
- description=None, keywords=None, css=None):
+[docs]def write_full_page(img_dict, filepath, title, page_filename=None, tab_s_name=None,
+ preamble=None, postamble=None, postamble_no_imt_link=False,
+ compression=False,
+ initial_selectors=None, show_selector_names=False,
+ show_singleton_selectors=True, optgroups=None,
+ url_type='int', only_show_rel_url=False, verbose=False,
+ style='horiz dropdowns', write_intmed_tmpfile=False,
+ description=None, keywords=None, css=None):
'''
Writes out an :class:`ImageMetaTag.ImageDict` as a webpage, to a given file location.
The files are created as temporary files and when complete they replace any files that
@@ -182,178 +184,178 @@ Source code for ImageMetaTag.webpage
page_dependencies = []
- if not (isinstance(img_dict, imt.ImageDict) or img_dict is None):
- raise ValueError('write_full_page works on an ImageMetaTag ImageDict.')
+ if not (isinstance(img_dict, imt.ImageDict) or img_dict is None):
+ raise ValueError('write_full_page works on an ImageMetaTag ImageDict.')
- if page_filename is None:
+ if page_filename is None:
page_filename = os.path.basename(filepath)
if not page_filename:
- msg = 'filepath ({}) must specify a file (not a directory)'
+ msg = 'filepath ({}) must specify a file (not a directory)'
raise ValueError(msg.format(filepath))
- # other files involved:
+ # other files involved:
file_dir, file_name = os.path.split(filepath)
page_dependencies.append(file_name)
- if img_dict is None:
+ if img_dict is None:
json_files = []
else:
- # now make sure the required javascript library is copied over to the file_dir:
+ # now make sure the required javascript library is copied over to the file_dir:
js_files = copy_required_javascript(file_dir, style, compression=compression)
page_dependencies.extend(js_files)
- # we have real data to work with:
- # this tests the dict has uniform_depth, which is needed for all current webpages.
- dict_depth = img_dict.dict_depth(uniform_depth=True)
- # work out what files we need to create:
+ # we have real data to work with:
+ # this tests the dict has uniform_depth, which is needed for all current webpages.
+ dict_depth = img_dict.dict_depth(uniform_depth=True)
+ # work out what files we need to create:
file_name_no_ext = os.path.splitext(file_name)[0]
- # json file to hold the image_dict branching data etc:
+ # json file to hold the image_dict branching data etc:
json_file_no_ext = os.path.join(file_dir, file_name_no_ext)
json_files = write_json(img_dict, json_file_no_ext, compression=compression)
- # the final page is dependent on the final locations of the json files,
- # relative to the html:
+ # the final page is dependent on the final locations of the json files,
+ # relative to the html:
page_dependencies.extend([os.path.split(x[1])[1] for x in json_files])
- # this is the internal name the different selectors, associated lists for the selectors, and
- # the list of files (all with a numbered suffix):
- selector_prefix = 'sel'
- url_separator = '|'
+ # this is the internal name the different selectors, associated lists for the selectors, and
+ # the list of files (all with a numbered suffix):
+ selector_prefix = 'sel'
+ url_separator = '|'
- # now write the actual output file:
+ # now write the actual output file:
if write_intmed_tmpfile:
- # get a temporary file:
- with tempfile.NamedTemporaryFile('w', suffix='.html', prefix='imt_tmppage_',
- dir=file_dir, delete=False) as html_file_obj:
+ # get a temporary file:
+ with tempfile.NamedTemporaryFile('w', suffix='.html', prefix='imt_tmppage_',
+ dir=file_dir, delete=False) as html_file_obj:
tmp_html_filepath = html_file_obj.name
filepath_to_write = tmp_html_filepath
else:
filepath_to_write = filepath
- # start the indent:
- ind = ''
-
- # open the file - this is a nice and simple file so just use the with open...
- with open(filepath_to_write, 'w') as out_file:
- # write out the start of the file:
- out_file.write('<!DOCTYPE html>\n')
- out_file.write(ind + '<html>\n')
- # increase the indent level:
+ # start the indent:
+ ind = ''
+
+ # open the file - this is a nice and simple file so just use the with open...
+ with open(filepath_to_write, 'w') as out_file:
+ # write out the start of the file:
+ out_file.write('<!DOCTYPE html>\n')
+ out_file.write(ind + '<html>\n')
+ # increase the indent level:
ind = _indent_up_one(ind)
- out_file.write(ind + '<head>\n')
+ out_file.write(ind + '<head>\n')
ind = _indent_up_one(ind)
- if title is not None:
- out_file.write('{}<title>{}</title>\n'.format(ind, title))
- out_str = ind+'<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">\n'
+ if title is not None:
+ out_file.write('{}<title>{}</title>\n'.format(ind, title))
+ out_str = ind+'<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">\n'
out_file.write(out_str)
if css:
- # copy the css file into the file_dir (unless that's where it already is):
+ # copy the css file into the file_dir (unless that's where it already is):
try:
shutil.copy(css, file_dir)
except shutil.Error as sh_err:
if imt.PY3:
- if 'are the same file' in sh_err.__str__():
+ if 'are the same file' in sh_err.__str__():
pass
else:
raise sh_err
else:
- if 'are the same file' in sh_err.message:
+ if 'are the same file' in sh_err.message:
pass
else:
raise sh_err
base_css = os.path.basename(css)
page_dependencies.append(base_css)
- out_str = ind+'<link rel="stylesheet" type="text/css" href="{0}">\n'
+ out_str = ind+'<link rel="stylesheet" type="text/css" href="{0}">\n'
out_file.write(out_str.format(base_css))
else:
- if style == 'horiz dropdowns':
- # write out a little css at the top:
- css = '''{0}<style>
-{0} body {{
-{0} background-color: #ffffff;
-{0} color: #000000;
-{0} }}
-{0} body, div, dl, dt, dd, li, h1, h2 {{
-{0} margin: 0;
-{0} padding: 0;
-{0} }}
-{0} h3, h4, h5, h6, pre, form, fieldset, input {{
-{0} margin: 0;
-{0} padding: 0;
-{0} }}
-{0} textarea, p, blockquote {{
-{0} margin: 0;
-{0} padding: 0;
-{0} }}
-{0} th, td {{
-{0} margin: 0;
-{0} padding: 0;
-{0} vertical-align: top;
-{0} }}
-{0} fieldset, img {{
-{0} border: 0 none;
-{0} vertical-align: top;
-{0} }}
-{0} body {{
-{0} font: 12px Myriad,Helvetica,Tahoma,Arial,clean,sans-serif;
-{0} *font-size: 75%;
-{0} }}
-{0} h1 {{
-{0} font-size: 1.5em;
-{0} font-weight: normal;
-{0} line-height: 1em;
-{0} margin-top: 1em;
-{0} margin-bottom:0;
-{0} }}
-{0} h2 {{
-{0} font-size: 1.1667em;
-{0} font-weight: bold;
-{0} line-height: 1.286em;
-{0} margin-top: 1.929em;
-{0} margin-bottom:0.643em;
-{0} }}
-{0} h3, h4, h5, h6 {{
-{0} font-size: 1em;
-{0} font-weight: bold;
-{0} line-height: 1.5em;
-{0} margin-top: 1.5em;
-{0} margin-bottom: 0;
-{0} }}
-{0} p {{
-{0} font-size: 1em;
-{0} margin-top: 1.5em;
-{0} margin-bottom: 1.5em;
-{0} line-height: 1.5em;
-{0} }}
-{0} pre, code {{
-{0} font-size:115%;
-{0} *font-size:100%;
-{0} font-family: Courier, "Courier New";
-{0} background-color: #efefef;
-{0} border: 1px solid #ccc;
-{0} }}
-{0} pre {{
-{0} border-width: 1px 0;
-{0} padding: 1.5em;
-{0} }}
-{0} table {{
-{0} font-size:100%;
-{0} }}
-{0}</style>
-'''
+ if style == 'horiz dropdowns':
+ # write out a little css at the top:
+ css = '''{0}<style>
+{0} body {{
+{0} background-color: #ffffff;
+{0} color: #000000;
+{0} }}
+{0} body, div, dl, dt, dd, li, h1, h2 {{
+{0} margin: 0;
+{0} padding: 0;
+{0} }}
+{0} h3, h4, h5, h6, pre, form, fieldset, input {{
+{0} margin: 0;
+{0} padding: 0;
+{0} }}
+{0} textarea, p, blockquote {{
+{0} margin: 0;
+{0} padding: 0;
+{0} }}
+{0} th, td {{
+{0} margin: 0;
+{0} padding: 0;
+{0} vertical-align: top;
+{0} }}
+{0} fieldset, img {{
+{0} border: 0 none;
+{0} vertical-align: top;
+{0} }}
+{0} body {{
+{0} font: 12px Myriad,Helvetica,Tahoma,Arial,clean,sans-serif;
+{0} *font-size: 75%;
+{0} }}
+{0} h1 {{
+{0} font-size: 1.5em;
+{0} font-weight: normal;
+{0} line-height: 1em;
+{0} margin-top: 1em;
+{0} margin-bottom:0;
+{0} }}
+{0} h2 {{
+{0} font-size: 1.1667em;
+{0} font-weight: bold;
+{0} line-height: 1.286em;
+{0} margin-top: 1.929em;
+{0} margin-bottom:0.643em;
+{0} }}
+{0} h3, h4, h5, h6 {{
+{0} font-size: 1em;
+{0} font-weight: bold;
+{0} line-height: 1.5em;
+{0} margin-top: 1.5em;
+{0} margin-bottom: 0;
+{0} }}
+{0} p {{
+{0} font-size: 1em;
+{0} margin-top: 1.5em;
+{0} margin-bottom: 1.5em;
+{0} line-height: 1.5em;
+{0} }}
+{0} pre, code {{
+{0} font-size:115%;
+{0} *font-size:100%;
+{0} font-family: Courier, "Courier New";
+{0} background-color: #efefef;
+{0} border: 1px solid #ccc;
+{0} }}
+{0} pre {{
+{0} border-width: 1px 0;
+{0} padding: 1.5em;
+{0} }}
+{0} table {{
+{0} font-size:100%;
+{0} }}
+{0}</style>
+'''
out_file.write(css.format(ind))
- # now write out the specific stuff to the html header:
- if img_dict is None:
- # an empty img_dict needs very little:
+ # now write out the specific stuff to the html header:
+ if img_dict is None:
+ # an empty img_dict needs very little:
write_js_to_header(img_dict,
file_obj=out_file,
pagename=page_filename, tabname=tab_s_name,
ind=ind,
description=description, keywords=keywords)
else:
- # the json_files is a list of (tmp_file, final_file) tuples.
- # Here we want the final one:
+ # the json_files is a list of (tmp_file, final_file) tuples.
+ # Here we want the final one:
final_json_files = [os.path.split(x[1])[1] for x in json_files]
write_js_to_header(img_dict, initial_selectors=initial_selectors, optgroups=optgroups,
file_obj=out_file, json_files=final_json_files, js_files=js_files,
@@ -363,73 +365,73 @@ Source code for ImageMetaTag.webpage
url_type=url_type, only_show_rel_url=only_show_rel_url,
style=style, ind=ind, compression=compression,
description=description, keywords=keywords)
- # now close the script and head:
+ # now close the script and head:
ind = _indent_down_one(ind)
- out_file.write(ind + '</script>\n')
+ out_file.write(ind + '</script>\n')
ind = _indent_down_one(ind)
- out_file.write(ind + '</head>\n')
+ out_file.write(ind + '</head>\n')
- # now start the body:
- out_file.write('{}<body>\n'.format(ind))
+ # now start the body:
+ out_file.write('{}<body>\n'.format(ind))
- # the preamble is the first thing to go in the body:
- if preamble is not None:
- out_file.write(preamble + '\n')
- # now the img_dict content:
- if img_dict is None:
- out_file.write('<p><h1>No images are available for this page.</h1></p>')
+ # the preamble is the first thing to go in the body:
+ if preamble is not None:
+ out_file.write(preamble + '\n')
+ # now the img_dict content:
+ if img_dict is None:
+ out_file.write('<p><h1>No images are available for this page.</h1></p>')
else:
- # now write out the end, which includes the placeholders for the actual
- # stuff that appears on the page:
+ # now write out the end, which includes the placeholders for the actual
+ # stuff that appears on the page:
if show_selector_names:
level_names = img_dict.level_names
else:
- level_names = False
- # if we're labelling selectors, and we have an animator button, label that too:
+ level_names = False
+ # if we're labelling selectors, and we have an animator button, label that too:
if img_dict.selector_animated > 1 and show_selector_names:
anim_level = level_names[img_dict.selector_animated]
else:
- anim_level = None
+ anim_level = None
write_js_placeholders(img_dict, file_obj=out_file, dict_depth=img_dict.dict_depth(),
style=style, level_names=level_names,
show_singleton_selectors=show_singleton_selectors,
animated_level=anim_level)
- # the body is done, so the postamble comes in:
- postamble_endline = 'Page created with <a href="{}">ImageMetaTag {}</a>'
+ # the body is done, so the postamble comes in:
+ postamble_endline = 'Page created with <a href="{}">ImageMetaTag {}</a>'
postamble_endline = postamble_endline.format(imt.__documentation__, imt.__version__)
if not postamble_no_imt_link:
- if postamble is None:
+ if postamble is None:
postamble = postamble_endline
else:
- postamble = '{}\n{}'.format(postamble, postamble_endline)
- if postamble is not None:
- out_file.write(postamble + '\n')
- # finish the body, and html:
- out_file.write(ind + '</body>\n')
- out_file.write('\n</html>')
+ postamble = '{}\n{}'.format(postamble, postamble_endline)
+ if postamble is not None:
+ out_file.write(postamble + '\n')
+ # finish the body, and html:
+ out_file.write(ind + '</body>\n')
+ out_file.write('\n</html>')
if write_intmed_tmpfile:
tmp_files_to_mv = json_files + [(tmp_html_filepath, filepath)]
else:
tmp_files_to_mv = json_files
for tmp_file_mv in tmp_files_to_mv:
- # now move the json, then the html files:
- os.chmod(tmp_file_mv[0], 0o644)
+ # now move the json, then the html files:
+ os.chmod(tmp_file_mv[0], 0o644)
shutil.move(tmp_file_mv[0], tmp_file_mv[1])
if verbose:
- print('File "%s" complete.' % filepath)
-
- return page_dependencies
+ print('File "%s" complete.' % filepath)
-[docs]def write_js_to_header(img_dict, initial_selectors=None, optgroups=None, style=None,
- file_obj=None, json_files=None, js_files=None,
- pagename=None, tabname=None, selector_prefix=None,
- show_singleton_selectors=True,
- url_separator='|', url_type='str', only_show_rel_url=False,
- ind=None, compression=False,
- description=None, keywords=None):
+ return page_dependencies
+
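With `write_intmed_tmpfile=True`, `write_full_page` writes everything to temporary files and only moves them into place once complete, so a half-written page never replaces a working one. A minimal sketch of that pattern, under the assumption that source and destination share a filesystem (which makes `shutil.move` a near-atomic rename); `replace_atomically` is a hypothetical helper name:

```python
import os
import shutil
import tempfile

def replace_atomically(final_path, content):
    """Write content to a temp file in the target directory, then move it
    over final_path once the write is complete."""
    file_dir = os.path.dirname(final_path) or '.'
    with tempfile.NamedTemporaryFile('w', suffix='.html', prefix='imt_tmppage_',
                                     dir=file_dir, delete=False) as tmp_obj:
        tmp_obj.write(content)
        tmp_path = tmp_obj.name
    # temp files default to owner-only permissions, so open them up first:
    os.chmod(tmp_path, 0o644)
    shutil.move(tmp_path, final_path)
```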
+[docs]def write_js_to_header(img_dict, initial_selectors=None, optgroups=None, style=None,
+ file_obj=None, json_files=None, js_files=None,
+ pagename=None, tabname=None, selector_prefix=None,
+ show_singleton_selectors=True,
+ url_separator='|', url_type='str', only_show_rel_url=False,
+ ind=None, compression=False,
+ description=None, keywords=None):
'''
Writes out the required ImageMetaTag config and data paths into a html header section
for an input :class:`ImageMetaTag.ImageDict`.
@@ -469,196 +471,196 @@ Source code for ImageMetaTag.webpage
    * description - html description metadata
* keywords - html keyword metadata
'''
- if not (isinstance(img_dict, imt.ImageDict) or img_dict is None):
- raise ValueError('Input img_dict is not an ImageMetaTag ImageDict')
+ if not (isinstance(img_dict, imt.ImageDict) or img_dict is None):
+ raise ValueError('Input img_dict is not an ImageMetaTag ImageDict')
- if ind is None:
- ind = ''
+ if ind is None:
+ ind = ''
- if description is not None:
- file_obj.write('{}<meta name="description" content="{}">\n'.format(ind, description))
- if keywords is not None:
- file_obj.write('{}<meta name="keywords" content="{}">\n'.format(ind, keywords))
+ if description is not None:
+ file_obj.write('{}<meta name="description" content="{}">\n'.format(ind, description))
+ if keywords is not None:
+ file_obj.write('{}<meta name="keywords" content="{}">\n'.format(ind, keywords))
- if img_dict is not None:
- ## add a reference to the data structure:
- #out_str = '{}<script type="text/javascript" src="{}"></script>\n'.format(ind, json_files)
- #file_obj.write(out_str)
+ if img_dict is not None:
+ ## add a reference to the data structure:
+ #out_str = '{}<script type="text/javascript" src="{}"></script>\n'.format(ind, json_files)
+ #file_obj.write(out_str)
- # now add a reference to the javascript functions to implement the style:
+ # now add a reference to the javascript functions to implement the style:
for js_file in js_files:
- out_str = '{}<script type="text/javascript" src="{}"></script>\n'.format(ind, js_file)
+ out_str = '{}<script type="text/javascript" src="{}"></script>\n'.format(ind, js_file)
file_obj.write(out_str)
- # now write out the javascript configuration variables:
- file_obj.write(ind + '<script type="text/javascript">\n')
+ # now write out the javascript configuration variables:
+ file_obj.write(ind + '<script type="text/javascript">\n')
ind = _indent_up_one(ind)
- # define, read in and parse the json file:
- out_str = '''{0}var json_files = {1};
-{0}var zl_unpack = {2};
-{0}imt = read_parse_json_files(json_files, zl_unpack);
-'''
+ # define, read in and parse the json file:
+ out_str = '''{0}var json_files = {1};
+{0}var zl_unpack = {2};
+{0}imt = read_parse_json_files(json_files, zl_unpack);
+'''
file_obj.write(out_str.format(ind, json_files, _py_to_js_bool(bool(compression))))
- # in case the page we are writing is embedded as a frame, write out the top
- # level page here;
- file_obj.write('{}var pagename = "{}"\n'.format(ind, pagename))
- # the tab name is used in setting up the URL in nested frames:
- file_obj.write('{}var tab_name = "{}";\n'.format(ind, tabname))
+ # in case the page we are writing is embedded as a frame, write out the top
+ # level page here;
+ file_obj.write('{}var pagename = "{}"\n'.format(ind, pagename))
+ # the tab name is used in setting up the URL in nested frames:
+ file_obj.write('{}var tab_name = "{}";\n'.format(ind, tabname))
dict_depth = img_dict.dict_depth()
- # the key_to_selector variable is what maps each set of keys onto a selector on the page:
+ # the key_to_selector variable is what maps each set of keys onto a selector on the page:
key_to_selector = str([selector_prefix + str(x) for x in range(dict_depth)])
- file_obj.write('{}var key_to_selector = {};\n'.format(ind, key_to_selector))
- # this determines whether a selector uses the animation controls on a page:
- file_obj.write('{}var anim_sel = {};\n'.format(ind, img_dict.selector_animated))
- # and the direction the animation runs in:
- file_obj.write('{}var anim_dir = {};\n'.format(ind, img_dict.animation_direction))
-
- # the url_separator is the text character that goes between the variables in the url:
- if url_separator == '&':
- msg = 'Cannot use "&" as the url_separator, as some strings will '
- msg += 'become html special characters. For instance ¶-global '
- msg += 'will be treated as a paragraph then -global, not the intended string.'
+ file_obj.write('{}var key_to_selector = {};\n'.format(ind, key_to_selector))
+ # this determines whether a selector uses the animation controls on a page:
+ file_obj.write('{}var anim_sel = {};\n'.format(ind, img_dict.selector_animated))
+ # and the direction the animation runs in:
+ file_obj.write('{}var anim_dir = {};\n'.format(ind, img_dict.animation_direction))
+ # and the direction the animation runs in:
+
+ # the url_separator is the text character that goes between the variables in the url:
+ if url_separator == '&':
+ msg = 'Cannot use "&" as the url_separator, as some strings will '
+ msg += 'become html special characters. For instance ¶-global '
+ msg += 'will be treated as a paragraph then -global, not the intended string.'
raise ValueError(msg)
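The restriction on `&` exists because the separator can fuse with the following variable name to form an HTML character entity. A standalone sketch with Python's stdlib (the strings here are made-up illustrations, not taken from the library):

```python
import html

# "&para;-global" is what appears in a URL once the separator "&"
# fuses with a following variable name beginning "para;":
fused = "&para;-global"
# an HTML-entity decoder (as in a browser) turns "&para;" into the pilcrow sign:
decoded = html.unescape(fused)
print(decoded)  # -> "¶-global", not the intended "&para;-global"
```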
    file_obj.write('{}var url_separator = "{}";\n'.format(ind, url_separator))

    # the url_type determines whether the url is full of integers (int), with meaningful
    # values internally, or text which looks more meaningful to the user:
    file_obj.write('{}var url_type = "{}";\n'.format(ind, url_type))
    # the show_rel_url logical (converted to a string to init a javascript bool):
    file_obj.write('{}var show_rel_url = {};\n'.format(ind, _py_to_js_bool(only_show_rel_url)))

    # the selected_id needs to be defined here too, as it's used as a global variable
    # (it will be overwritten later if the URL changes it, and when selectors change it):
    if initial_selectors is None:
        # if it's not set, then set it to something invalid, and the validator
        # in the javascript will sort it out. It MUST be the right length though:
        file_obj.write('{}var selected_id = {};\n'.format(ind, str([-1]*dict_depth)))
    else:
        if not isinstance(initial_selectors, list):
            msg = 'Input initial_selectors must be a list, of length the depth of the ImageDict'
            raise ValueError(msg)
        if len(initial_selectors) != img_dict.dict_depth():
            msg = 'Input initial_selectors must be a list, of length the depth of the ImageDict'
            raise ValueError(msg)

        # the input can either be a list of integer indices, or strings that match:
        initial_selectors_as_inds = []
        initial_selectors_as_string = []
        for i_sel, sel_value in enumerate(initial_selectors):
            if isinstance(sel_value, int):
                if sel_value < 0 or sel_value >= len(img_dict.keys[i_sel]):
                    raise ValueError('initial_selectors are out of range')
                # store the initial_selectors_as_inds:
                initial_selectors_as_inds.append(sel_value)
                # and as a string:
                initial_selectors_as_string.append(img_dict.keys[i_sel][sel_value])
            else:
                # get the index of that value:
                initial_selectors_as_inds.append(img_dict.keys[i_sel].index(sel_value))
                # and simply store the string:
                initial_selectors_as_string.append(sel_value)

        # check that's valid:
        if img_dict.return_from_list(initial_selectors_as_string) is None:
            raise ValueError('Input initial_selectors does not end up at a valid image/payload')
        # write that out:
        file_obj.write('{}var selected_id = {};\n'.format(ind, initial_selectors_as_inds))
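The validation above boils down to: each entry is either an index into the corresponding key list, or a key string to look up. A minimal standalone sketch of that normalisation (with made-up key lists, not tied to an ImageDict):

```python
def normalise_selectors(initial, key_lists):
    '''Convert a mixed list of indices/strings into parallel index and string lists.'''
    as_inds, as_strs = [], []
    for keys, sel in zip(key_lists, initial):
        if isinstance(sel, int):
            if not 0 <= sel < len(keys):
                raise ValueError('initial_selectors are out of range')
            as_inds.append(sel)
            as_strs.append(keys[sel])
        else:
            as_inds.append(keys.index(sel))  # raises ValueError if absent
            as_strs.append(sel)
    return as_inds, as_strs

inds, strs = normalise_selectors([1, 'raw'], [['t+0', 't+6'], ['raw', 'processed']])
print(inds, strs)  # [1, 0] ['t+6', 'raw']
```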
    # now write out the lists of keys, to the different levels:
    keys_to_js = [str(x[1]) for x in img_dict.keys.items()]
    file_obj.write('{}var key_lists = [{},\n'.format(ind, keys_to_js[0]))
    ind = _indent_up_one(ind)
    for i_depth in range(1, dict_depth):
        file_obj.write('{}{},\n'.format(ind, keys_to_js[i_depth]))
    ind = _indent_down_one(ind)
    file_obj.write(ind + '];\n')
    # now write out optgroups:
    non_optgroup_elems = {}
    if optgroups:
        # if the optgroup order hasn't been specified, then
        # the default is a sort:
        for group_ind, optgroup in optgroups.items():
            # keep a note of the elements in the whole list, so we know which ones
            # aren't in any optgroup:
            all_keys = copy.deepcopy(img_dict.keys[group_ind])
            if 'imt_optgroup_order' not in optgroup:
                optgroup['imt_optgroup_order'] = sorted(optgroup.keys())
            # make sure that the elements within the optgroup are a sorted
            # list, sorted according to the order in the img_dict.keys()
            for group_name, group_elements in optgroup.items():
                if group_name != 'imt_optgroup_order':
                    # pick up the indices of the elements, in the main list of keys:
                    elem_inds = [img_dict.keys[group_ind].index(x) for x in group_elements]
                    # now sort by elem_inds:
                    sorted_elems = sorted(zip(elem_inds, group_elements))
                    # and pull out the bit we need again:
                    optgroup[group_name] = [x[1] for x in sorted_elems]
                    # and make a list of those elements that aren't in any optgroup!
                    for group_element in group_elements:
                        all_keys.remove(group_element)
            non_optgroup_elems[group_ind] = all_keys
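The element-ordering step above (sort each optgroup's members by their position in the master key list) can be sketched independently; the key lists here are hypothetical examples:

```python
def sort_group_by_master(master_keys, group_elements):
    '''Order group_elements to match their order of appearance in master_keys.'''
    elem_inds = [master_keys.index(x) for x in group_elements]
    # sorting (index, element) pairs puts the elements into master order:
    return [elem for _, elem in sorted(zip(elem_inds, group_elements))]

master = ['00Z', '06Z', '12Z', '18Z']
print(sort_group_by_master(master, ['18Z', '06Z']))  # ['06Z', '18Z']
```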
        # convert the optgroups to a list, in javascript, with each selector
        # having an element within it:
        optg_str = '['
        non_optg_str = '['
        for i_depth in range(dict_depth):
            if i_depth in optgroups:
                optg_str += json.dumps(optgroups[i_depth], separators=(',', ':'))
                non_optg_str += str(non_optgroup_elems[i_depth])
            else:
                # no optgroup for this selector:
                optg_str += '{}'
                non_optg_str += '[]'
            if i_depth < dict_depth - 1:
                # the final element mustn't have a comma or internet explorer will complain:
                optg_str += ','
                non_optg_str += ','
        # close the javascript list:
        optg_str += '];'
        non_optg_str += '];'
    else:
        optg_str = '[' + '{},' * (dict_depth-1) + '{}]'
        non_optg_str = '[' + '[],' * (dict_depth-1) + '[]]'
    file_obj.write('{}var optgroups = {}\n'.format(ind, optg_str))
    file_obj.write('{}var optgroup_redisual = {}\n'.format(ind, non_optg_str))
    file_obj.write('{}var show_singleton_selectors = {};\n'.format(ind, int(show_singleton_selectors)))

    # now some top level things:
    if style == 'horiz dropdowns':
        file_obj.write('''
{0}// other top level derived variables
{0}// the depth of the ImageMetaTag ImageDict (number of selectors):
{0}var n_deep = selected_id.length;
{0}// a list of the options available to the animator buttons, with the current selection
{0}var anim_options = [];
{0}// the index of the current option for the animator:
{0}var anim_ind = 0;
'''.format(ind))

    # now, the main call:
    file_obj.write(ind + '// redefine onload, so it calls imt_main to write the page:\n')
    file_obj.write(ind + 'window.onload = function() {imt_main();}\n')
    # END of the imt specific header content.

def write_js_setup_defaults(selector_prefix=None, list_prefix=None, file_list_name=None):
'''
this specifies defaults for the internal names the different selectors, associated lists for
the selectors, and the list of files (all with a numbered suffix)
'''
    if selector_prefix is None:
        selector_prefix = 'sel'
    if list_prefix is None:
        list_prefix = 'list'
    if file_list_name is None:
        file_list_name = 'file_list'
    return (selector_prefix, list_prefix, file_list_name)
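Since all three arguments default to None, the function can fill in any subset of the names. A quick standalone demonstration (re-stating the function above so the snippet runs on its own):

```python
def write_js_setup_defaults(selector_prefix=None, list_prefix=None, file_list_name=None):
    # mirror of the defaults above, for a standalone demonstration:
    if selector_prefix is None:
        selector_prefix = 'sel'
    if list_prefix is None:
        list_prefix = 'list'
    if file_list_name is None:
        file_list_name = 'file_list'
    return (selector_prefix, list_prefix, file_list_name)

print(write_js_setup_defaults())                    # ('sel', 'list', 'file_list')
print(write_js_setup_defaults(list_prefix='opts'))  # ('sel', 'opts', 'file_list')
```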
def write_json(img_dict, file_name_no_ext, compression=False,
               chunk_char_limit=1e7):
    '''
    Writes a json dump of the :class:`ImageMetaTag.ImageDict` tree structure
    to a target file path.
    '''
    def json_from_dict(in_dict):
        'returns a json string from an input dict'
        return json.dumps(in_dict, separators=(',', ':'))

    if isinstance(img_dict, imt.ImageDict):
        dict_as_json = json_from_dict(img_dict.dict)
    elif isinstance(img_dict, str):
        dict_as_json = img_dict
    else:
        raise ValueError('input img_dict is not an ImageMetaTag.ImageDict or string')
    # file suffix:
    suffix = '.json'
    if compression:
        suffix += '.zlib'

    # the output files:
    out_files = []
    tmp_file_dir = os.path.split(file_name_no_ext)[0]
    # use the maximum length of a single string per file:
    n_chunks = np.ceil(len(dict_as_json) / chunk_char_limit)
    n_chunks = int(n_chunks)
    if n_chunks == 1:
        # easy if it fits into a single file:
        json_file = file_name_no_ext + suffix
        if compression:
            wrt_str, file_mode = compress_string(dict_as_json)
        else:
            wrt_str = dict_as_json
            file_mode = 'w'
        with tempfile.NamedTemporaryFile(file_mode, suffix='.json', prefix='imt_',
                                         dir=tmp_file_dir, delete=False) as file_obj:
            file_obj.write(wrt_str)
            tmp_file_path = file_obj.name
        # make a note of the outputs:
        out_files.append((tmp_file_path, json_file))
    else:
        # find the appropriate depth at which to split the dict:
        if not isinstance(img_dict, imt.ImageDict):
            msg = 'Large data sets need to be supplied as an ImageDict, so they can be split'
            raise ValueError(msg)
        dict_depth = img_dict.dict_depth(uniform_depth=True)
        if len(img_dict.keys) != dict_depth:
            raise ValueError('Inconsistent depth and keys. Do the keys need relisting?')

        # determine approximately how many splits will be obtained by breaking up the
        # dictionary at each level, assuming the tree structure branches uniformly.
        n_by_depth = []
        for i_depth in range(dict_depth):
            if i_depth == 0:
                n_by_depth.append(len(img_dict.keys[i_depth]) * n_by_depth[-1])
            if n_by_depth[-1] >= n_chunks:
                break
        # i_depth is an index, but depth is the number of levels, so needs one adding:
        depth = i_depth + 1
        # get all the combinations of keys that reach the required depth:
        keys, array_inds = img_dict.dict_index_array(maxdepth=depth)
        # now loop through the array_inds. Each one contains the indices of a valid path through
        # the dict, to the required depth. Each subdict will be written to a separate .json file.
        paths = []
        top_dict = {}
        for i_json, path_inds in enumerate(array_inds):
            # traverse to the subdict, given by the current path,
            # storing the keys of the path along the way:
            subdict = img_dict.dict
            path = []
            for level, ind in enumerate(path_inds):
                subdict = subdict[keys[level][ind]]
                path.append(keys[level][ind])
            # convert the subdict to .json
            subdict_as_json = json_from_dict(subdict)
            # and write this out:
            json_file = '{}_{}{}'.format(file_name_no_ext, i_json, suffix)
            if compression:
                wrt_str, file_mode = compress_string(subdict_as_json)
            else:
                # note: write the subdict's json here (not the whole dict's):
                wrt_str = subdict_as_json
                file_mode = 'w'
            with tempfile.NamedTemporaryFile(file_mode, suffix='.json', prefix='imt_',
                                             dir=tmp_file_dir, delete=False) as file_obj:
                file_obj.write(wrt_str)
                tmp_file_path = file_obj.name
            # make a note of the outputs:
            out_files.append((tmp_file_path, json_file))
            # now add this to the top level dict structure, so the final
            # json file can pull them all together:
            path_dict = {path[-1]: '**FILE_{}**'.format(i_json)}
            # go backwards, from the second from last element of the path, to add more:
            for key in path[-2::-1]:
                path_dict = {key: path_dict}
            img_dict.dict_union(top_dict, path_dict)
            # add it to the paths, as a cross reference:
            paths.append(path)
        # now create the final json file that combines the previous
        # ones into a single usable object:
        i_json += 1
        # convert the top-level dict to .json
        subdict_as_json = json_from_dict(top_dict)
        # and write this out:
        json_file = '{}_{}{}'.format(file_name_no_ext, i_json, suffix)
        if compression:
            wrt_str, file_mode = compress_string(subdict_as_json)
        else:
            # note: write the top-level dict's json here (not the whole dict's):
            wrt_str = subdict_as_json
            file_mode = 'w'
        with tempfile.NamedTemporaryFile(file_mode, suffix='.json', prefix='imt_',
                                         dir=tmp_file_dir, delete=False) as file_obj:
            file_obj.write(wrt_str)
            tmp_file_path = file_obj.name
        # make a note of the outputs:
        out_files.append((tmp_file_path, json_file))

    return out_files

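The bookkeeping above can be summarised: the chunk count comes straight from the character limit, and in the multi-file case the top-level file refers to its children through `**FILE_n**` placeholder strings. A standalone sketch with a toy dict (not an ImageDict) and a deliberately tiny limit:

```python
import json
import math

chunk_char_limit = 40  # deliberately tiny, to force a split
tree = {'plot_a': {'t+0': 'a0.png', 't+6': 'a6.png'},
        'plot_b': {'t+0': 'b0.png', 't+6': 'b6.png'}}

as_json = json.dumps(tree, separators=(',', ':'))
n_chunks = math.ceil(len(as_json) / chunk_char_limit)
print(n_chunks)

# split at the top level, one file per branch, and build the
# cross-referencing top-level structure:
top = {key: '**FILE_{}**'.format(i) for i, key in enumerate(tree)}
print(top)  # {'plot_a': '**FILE_0**', 'plot_b': '**FILE_1**'}
```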
def compress_string(in_str):
    '''
    Compresses a string using zlib to a format that can be read with pako.
    Returns both the compressed string and the file mode to use.
    '''
    if imt.PY3:
        # python3: compress a byte string, so the file is written in binary mode:
        comp_str = zlib.compress(in_str.encode(encoding='utf-8'))
        file_mode = 'wb'
    else:
        # python2: compress a string:
        comp_str = zlib.compress(in_str)
        file_mode = 'w'
    return comp_str, file_mode
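zlib produces a standard deflate stream of bytes (hence the `'wb'` file mode), and the same stream is what a decompressor such as pako's `inflate` reads back. A roundtrip check in Python, using a made-up json fragment:

```python
import zlib

original = '{"plot": "t+0.png"}'
compressed = zlib.compress(original.encode('utf-8'))
# the compressed payload is bytes; decompressing recovers the original text:
restored = zlib.decompress(compressed).decode('utf-8')
print(restored == original)  # True
```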
def write_js_placeholders(img_dict, file_obj=None, dict_depth=None, selector_prefix=None,
                          style='horiz dropdowns', level_names=False,
                          show_singleton_selectors=True,
                          animated_level=None):
    '''
    Writes the placeholders into the page body, for the javascript to manipulate.
    '''
    if not show_singleton_selectors:
        # work out which selectors we actually want to show:
        show_sel = [len(img_dict.keys[x]) > 1 for x in range(dict_depth)]
        if not any(show_sel):
            # if no selectors would be shown this way, that's usually a mistake,
            # so show them all:
            show_sel = [True] * dict_depth
    else:
        show_sel = [True] * dict_depth
    sels_shown = sum(show_sel)

    if selector_prefix is None:
        selector_prefix, _junk1, _junk2 = write_js_setup_defaults()

    apply_level_names = False
    if level_names:
        if not isinstance(level_names, list):
            raise ValueError('level_names needs to be a list, of length dict_depth')
        if len(level_names) != dict_depth:
            raise ValueError('level_names needs to be a list, of length dict_depth')
        apply_level_names = True
    else:
        apply_level_names = False

    if style == 'horiz dropdowns':
        file_obj.write('''
<!-- Now for some placeholders for the scripts to put content -->
<table border=0 cellspacing=0 cellpadding=0 width=99% align=center>
 <tr>
  <td>
   <font size=3>''')
        # a text label for the animator buttons:
        if isinstance(animated_level, str):
            anim_label = '{}: '.format(animated_level)
        else:
            anim_label = ''

        # for each level of depth in the plot dictionary, add a span to hold the selector:
        if apply_level_names:
            # if we want labelled selectors, then write out
            # a table, with pairs of label, selector, in columns:
            file_obj.write('''
 <table border=0 cellspacing=0 cellpadding=0 style='border-spacing: 3px 0;'>
  <tr>
''')
            for level in range(dict_depth):
                if show_sel[level]:
                    file_obj.write('   <td>{} </td>\n'.format(level_names[level]))
            file_obj.write('''  </tr>
  <tr>
''')
            for level in range(dict_depth):
                if show_sel[level]:
                    selp = selector_prefix + str(level)
                    out_str = '   <td><span id="{}"> </span></td>\n'.format(selp)
                    file_obj.write(out_str)
            file_obj.write('''  </tr>
''')
            # add the placeholder for the animator buttons:
            file_obj.write('''  <tr>
   <td colspan={}>
    {}<span id="animator1"> </span>
    <span id="animator2"> </span>
   </td>
  </tr>
 </table>
'''.format(dict_depth, anim_label))
        else:
            # simply a set of spans, in a line:
            for lev in range(dict_depth):
                if show_sel[lev]:
                    file_obj.write('''
 <span id="%s%s"> </span>''' % (selector_prefix, lev))
            file_obj.write('\n <br>')
            # add the placeholder for the animator buttons:
            file_obj.write('''
 {}<span id="animator1"> </span>
 <span id="animator2"> </span>
 <br>
'''.format(anim_label))

        # now add somewhere for the image to go:
        file_obj.write(''' <div id="the_image">Please wait while the page is loading</div>
 <div id="the_url">....</div>''')
        # and finish off the placeholders:
        file_obj.write('''
   </font>
  </td>
 </tr>
</table>

''')
    else:
        raise ValueError('"%s" style of content placeholder not defined' % style)

def copy_required_javascript(file_dir, style, compression=False, overwrite=True):
    '''
    Copies the required javascript library to the directory
    containing the required page (file_dir) for a given webpage style.
    Also copies the pako javascript library, needed to read pages compressed
    with zlib, if compression=True.
    '''
    if style == 'horiz dropdowns':
        imt_js_to_copy = 'imt_dropdown.js'
        # get this from the installed ImageMetaTag directory:
        file_src_dir = os.path.join(imt.__path__[0], 'javascript')
        first_line = '// ImageMetaTag dropdown menu scripting - vn{}\n'.format(imt.__version__)
    else:
        raise ValueError('Javascript library not set up for style: {}'.format(style))
    if not os.path.isfile(os.path.join(file_dir, imt_js_to_copy)):
        # file isn't in target dir, so copy it:
        shutil.copy(os.path.join(file_src_dir, imt_js_to_copy),
                    os.path.join(file_dir, imt_js_to_copy))
    else:
        # the file is there, check it's right:
        with open(os.path.join(file_dir, imt_js_to_copy)) as file_obj:
            this_first_line = file_obj.readline()
        if first_line == this_first_line:
            # the file is good, move on:
            pass
        else:
            if overwrite:
                shutil.copy(os.path.join(file_src_dir, imt_js_to_copy),
                            os.path.join(file_dir, imt_js_to_copy))
            else:
                print('''File: {}/{} differs from the expected contents, but is
not being overwritten. Your webpage may be broken!'''.format(file_dir, imt_js_to_copy))
    # make a list of all the required javascript files:
    js_files = [imt_js_to_copy]
    # now move on to javascript dependencies from the compression:
    if compression:
        js_to_copy = PAKO_JS_FILE
        js_src = os.path.join(file_src_dir, js_to_copy)
        js_dest = os.path.join(file_dir, js_to_copy)
        # if the file is already at destination, we're good:
        if os.path.isfile(js_dest):
            pass
        else:
            # is the required file in the javascript source directory?
            if not os.path.isfile(js_src):
                # we need to get the required javascript from source.
                #
                # if we have permission to write to the file_src_dir then
                # try to do so. This means it's installed for all users from this
                # install of ImageMetaTag:
                if os.access(file_src_dir, os.W_OK):
                    pako_to_dir = file_src_dir
                    # now get pako:
                    get_pako(pako_to_dir=pako_to_dir)
                    # and copy it to where it's needed for this call:
                    shutil.copy(js_src, js_dest)
                else:
                    # put the pako js file into the target dir. At least it will
                    # be available for subsequent writes to that dir:
                    pako_to_dir = file_dir
                    # now get pako to that dir:
                    get_pako(pako_to_dir=pako_to_dir)
            else:
                # copy the file:
                shutil.copy(js_src, js_dest)
        # finally, make a note:
        js_files.append(js_to_copy)
    return js_files

def get_pako(pako_to_dir=None):
    '''
    Obtains the required pako javascript code from a remote host, to a given
    javascript directory. If the javascript dir is not supplied, then
    the 'javascript' directory alongside the ImageMetaTag python code is used.
    '''
    import tarfile
    from urllib.request import urlopen
    # set up pako into the current imt_dir:
    if pako_to_dir is None:
        pako_to_dir = os.path.join(imt.__path__[0], 'javascript')

    # Open the url:
    pako_urlopen = urlopen(PAKO_SOURE_TAR)
    print("downloading " + PAKO_SOURE_TAR)
    # Open our local file for writing (the download is bytes, so use binary mode):
    with tempfile.NamedTemporaryFile('wb', suffix='.tar.gz', prefix='pako_',
                                     delete=False) as local_file:
        local_file.write(pako_urlopen.read())
        targz_file = local_file.name
    pako_urlopen.close()
    # now extract the file we need:
    with tarfile.open(name=targz_file, mode='r:gz') as tgz:
        if not tarfile.is_tarfile(targz_file):
            raise ValueError('Downloaded pako tar.gz file cannot be read.')
        else:
            target = 'pako-{}/dist/{}'.format(PAKO_RELEASE, PAKO_JS_FILE)
            target_file = tgz.extractfile(target)
            if target_file:
                # extractfile yields bytes, so write in binary mode:
                with open(os.path.join(pako_to_dir, PAKO_JS_FILE), 'wb') as final_file:
                    for line in target_file:
                        final_file.write(line)
    os.remove(targz_file)
def _indent_up_one(ind):
    'increases the indent level of an input ind by one'
    n_indents = int(len(ind) / LEN_INDENT)
    return INDENT * (n_indents + 1)

def _indent_down_one(ind):
    'decreases the indent level of an input ind by one'
    n_indents = int(len(ind) / LEN_INDENT)
    return INDENT * max(n_indents - 1, 0)
def _py_to_js_bool(py_bool):
    'Converts a python boolean to a string, in javascript bool format (all lower case)'
    if py_bool is True:
        return 'true'
    elif py_bool is False:
        return 'false'
    else:
        raise ValueError('input to _py_to_js_bool is not a boolean, it is: %s' % py_bool)
© Copyright 2015-2018, British Crown Copyright.
- font-style: italic;
-}
-
-div.figure p.caption span.caption-text {
-}
-
-
/* -- other body styles ----------------------------------------------------- */
ol.arabic {
@@ -440,10 +406,6 @@ dl.glossary dt {
font-size: 1.3em;
}
-.sig-paren {
- font-size: larger;
-}
-
.versionmodified {
font-style: italic;
}
@@ -494,13 +456,6 @@ pre {
overflow-y: hidden; /* fixes display issues on Chrome browsers */
}
-span.pre {
- -moz-hyphens: none;
- -ms-hyphens: none;
- -webkit-hyphens: none;
- hyphens: none;
-}
-
td.linenos pre {
padding: 5px 0px;
border: 0;
@@ -516,51 +471,22 @@ table.highlighttable td {
padding: 0 0.5em 0 0.5em;
}
-div.code-block-caption {
- padding: 2px 5px;
- font-size: small;
-}
-
-div.code-block-caption code {
- background-color: transparent;
-}
-
-div.code-block-caption + div > div.highlight > pre {
- margin-top: 0;
-}
-
-div.code-block-caption span.caption-number {
- padding: 0.1em 0.3em;
- font-style: italic;
-}
-
-div.code-block-caption span.caption-text {
-}
-
-div.literal-block-wrapper {
- padding: 1em 1em 0;
-}
-
-div.literal-block-wrapper div.highlight {
- margin: 0;
-}
-
-code.descname {
+tt.descname {
background-color: transparent;
font-weight: bold;
font-size: 1.2em;
}
-code.descclassname {
+tt.descclassname {
background-color: transparent;
}
-code.xref, a code {
+tt.xref, a tt {
background-color: transparent;
font-weight: bold;
}
-h1 code, h2 code, h3 code, h4 code, h5 code, h6 code {
+h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
background-color: transparent;
}
diff --git a/docs/build/html/_static/doctools.js b/docs/build/html/_static/doctools.js
index 8163495..c5455c9 100644
--- a/docs/build/html/_static/doctools.js
+++ b/docs/build/html/_static/doctools.js
@@ -4,7 +4,7 @@
*
* Sphinx JavaScript utilities for all documentation.
*
- * :copyright: Copyright 2007-2016 by the Sphinx team, see AUTHORS.
+ * :copyright: Copyright 2007-2014 by the Sphinx team, see AUTHORS.
* :license: BSD, see LICENSE for details.
*
*/
@@ -91,30 +91,6 @@ jQuery.fn.highlightText = function(text, className) {
});
};
-/*
- * backward compatibility for jQuery.browser
- * This will be supported until firefox bug is fixed.
- */
-if (!jQuery.browser) {
- jQuery.uaMatch = function(ua) {
- ua = ua.toLowerCase();
-
- var match = /(chrome)[ \/]([\w.]+)/.exec(ua) ||
- /(webkit)[ \/]([\w.]+)/.exec(ua) ||
- /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) ||
- /(msie) ([\w.]+)/.exec(ua) ||
- ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) ||
- [];
-
- return {
- browser: match[ 1 ] || "",
- version: match[ 2 ] || "0"
- };
- };
- jQuery.browser = {};
- jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true;
-}
-
/**
* Small JavaScript module for the documentation.
*/
@@ -124,7 +100,6 @@ var Documentation = {
this.fixFirefoxAnchorBug();
this.highlightSearchWords();
this.initIndexTable();
-
},
/**
@@ -177,10 +152,9 @@ var Documentation = {
/**
* workaround a firefox stupidity
- * see: https://bugzilla.mozilla.org/show_bug.cgi?id=645075
*/
fixFirefoxAnchorBug : function() {
- if (document.location.hash)
+ if (document.location.hash && $.browser.mozilla)
window.setTimeout(function() {
document.location.href += '';
}, 10);
@@ -253,29 +227,6 @@ var Documentation = {
});
var url = parts.join('/');
return path.substring(url.lastIndexOf('/') + 1, path.length - 1);
- },
-
- initOnKeyListeners: function() {
- $(document).keyup(function(event) {
- var activeElementType = document.activeElement.tagName;
- // don't navigate when in search box or textarea
- if (activeElementType !== 'TEXTAREA' && activeElementType !== 'INPUT' && activeElementType !== 'SELECT') {
- switch (event.keyCode) {
- case 37: // left
- var prevHref = $('link[rel="prev"]').prop('href');
- if (prevHref) {
- window.location.href = prevHref;
- return false;
- }
- case 39: // right
- var nextHref = $('link[rel="next"]').prop('href');
- if (nextHref) {
- window.location.href = nextHref;
- return false;
- }
- }
- }
- });
}
};
@@ -284,4 +235,4 @@ _ = Documentation.gettext;
$(document).ready(function() {
Documentation.init();
-});
\ No newline at end of file
+});
diff --git a/docs/build/html/_static/down-pressed.png b/docs/build/html/_static/down-pressed.png
index 7c30d00..6f7ad78 100644
Binary files a/docs/build/html/_static/down-pressed.png and b/docs/build/html/_static/down-pressed.png differ
diff --git a/docs/build/html/_static/down.png b/docs/build/html/_static/down.png
index f48098a..3003a88 100644
Binary files a/docs/build/html/_static/down.png and b/docs/build/html/_static/down.png differ
diff --git a/docs/build/html/_static/file.png b/docs/build/html/_static/file.png
index 254c60b..d18082e 100644
Binary files a/docs/build/html/_static/file.png and b/docs/build/html/_static/file.png differ
diff --git a/docs/build/html/_static/jquery.js b/docs/build/html/_static/jquery.js
index ab28a24..83589da 100644
--- a/docs/build/html/_static/jquery.js
+++ b/docs/build/html/_static/jquery.js
@@ -1,4 +1,2 @@
-/*! jQuery v1.11.1 | (c) 2005, 2014 jQuery Foundation, Inc. | jquery.org/license */
-!function(a,b){"object"==typeof module&&"object"==typeof module.exports?module.exports=a.document?b(a,!0):function(a){if(!a.document)throw new Error("jQuery requires a window with a document");return b(a)}:b(a)}("undefined"!=typeof window?window:this,function(a,b){var c=[],d=c.slice,e=c.concat,f=c.push,g=c.indexOf,h={},i=h.toString,j=h.hasOwnProperty,k={},l="1.11.1",m=function(a,b){return new m.fn.init(a,b)},n=/^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g,o=/^-ms-/,p=/-([\da-z])/gi,q=function(a,b){return b.toUpperCase()};m.fn=m.prototype={jquery:l,constructor:m,selector:"",length:0,toArray:function(){return d.call(this)},get:function(a){return null!=a?0>a?this[a+this.length]:this[a]:d.call(this)},pushStack:function(a){var b=m.merge(this.constructor(),a);return b.prevObject=this,b.context=this.context,b},each:function(a,b){return m.each(this,a,b)},map:function(a){return this.pushStack(m.map(this,function(b,c){return a.call(b,c,b)}))},slice:function(){return this.pushStack(d.apply(this,arguments))},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},eq:function(a){var b=this.length,c=+a+(0>a?b:0);return this.pushStack(c>=0&&b>c?[this[c]]:[])},end:function(){return this.prevObject||this.constructor(null)},push:f,sort:c.sort,splice:c.splice},m.extend=m.fn.extend=function(){var a,b,c,d,e,f,g=arguments[0]||{},h=1,i=arguments.length,j=!1;for("boolean"==typeof g&&(j=g,g=arguments[h]||{},h++),"object"==typeof g||m.isFunction(g)||(g={}),h===i&&(g=this,h--);i>h;h++)if(null!=(e=arguments[h]))for(d in e)a=g[d],c=e[d],g!==c&&(j&&c&&(m.isPlainObject(c)||(b=m.isArray(c)))?(b?(b=!1,f=a&&m.isArray(a)?a:[]):f=a&&m.isPlainObject(a)?a:{},g[d]=m.extend(j,f,c)):void 0!==c&&(g[d]=c));return g},m.extend({expando:"jQuery"+(l+Math.random()).replace(/\D/g,""),isReady:!0,error:function(a){throw new Error(a)},noop:function(){},isFunction:function(a){return"function"===m.type(a)},isArray:Array.isArray||function(a){return"array"===m.type(a)},isWindow:function(a){return 
null!=a&&a==a.window},isNumeric:function(a){return!m.isArray(a)&&a-parseFloat(a)>=0},isEmptyObject:function(a){var b;for(b in a)return!1;return!0},isPlainObject:function(a){var b;if(!a||"object"!==m.type(a)||a.nodeType||m.isWindow(a))return!1;try{if(a.constructor&&!j.call(a,"constructor")&&!j.call(a.constructor.prototype,"isPrototypeOf"))return!1}catch(c){return!1}if(k.ownLast)for(b in a)return j.call(a,b);for(b in a);return void 0===b||j.call(a,b)},type:function(a){return null==a?a+"":"object"==typeof a||"function"==typeof a?h[i.call(a)]||"object":typeof a},globalEval:function(b){b&&m.trim(b)&&(a.execScript||function(b){a.eval.call(a,b)})(b)},camelCase:function(a){return a.replace(o,"ms-").replace(p,q)},nodeName:function(a,b){return a.nodeName&&a.nodeName.toLowerCase()===b.toLowerCase()},each:function(a,b,c){var d,e=0,f=a.length,g=r(a);if(c){if(g){for(;f>e;e++)if(d=b.apply(a[e],c),d===!1)break}else for(e in a)if(d=b.apply(a[e],c),d===!1)break}else if(g){for(;f>e;e++)if(d=b.call(a[e],e,a[e]),d===!1)break}else for(e in a)if(d=b.call(a[e],e,a[e]),d===!1)break;return a},trim:function(a){return null==a?"":(a+"").replace(n,"")},makeArray:function(a,b){var c=b||[];return null!=a&&(r(Object(a))?m.merge(c,"string"==typeof a?[a]:a):f.call(c,a)),c},inArray:function(a,b,c){var d;if(b){if(g)return g.call(b,a,c);for(d=b.length,c=c?0>c?Math.max(0,d+c):c:0;d>c;c++)if(c in b&&b[c]===a)return c}return-1},merge:function(a,b){var c=+b.length,d=0,e=a.length;while(c>d)a[e++]=b[d++];if(c!==c)while(void 0!==b[d])a[e++]=b[d++];return a.length=e,a},grep:function(a,b,c){for(var d,e=[],f=0,g=a.length,h=!c;g>f;f++)d=!b(a[f],f),d!==h&&e.push(a[f]);return e},map:function(a,b,c){var d,f=0,g=a.length,h=r(a),i=[];if(h)for(;g>f;f++)d=b(a[f],f,c),null!=d&&i.push(d);else for(f in a)d=b(a[f],f,c),null!=d&&i.push(d);return e.apply([],i)},guid:1,proxy:function(a,b){var c,e,f;return"string"==typeof b&&(f=a[b],b=a,a=f),m.isFunction(a)?(c=d.call(arguments,2),e=function(){return 
a.apply(b||this,c.concat(d.call(arguments)))},e.guid=a.guid=a.guid||m.guid++,e):void 0},now:function(){return+new Date},support:k}),m.each("Boolean Number String Function Array Date RegExp Object Error".split(" "),function(a,b){h["[object "+b+"]"]=b.toLowerCase()});function r(a){var b=a.length,c=m.type(a);return"function"===c||m.isWindow(a)?!1:1===a.nodeType&&b?!0:"array"===c||0===b||"number"==typeof b&&b>0&&b-1 in a}var s=function(a){var b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u="sizzle"+-new Date,v=a.document,w=0,x=0,y=gb(),z=gb(),A=gb(),B=function(a,b){return a===b&&(l=!0),0},C="undefined",D=1<<31,E={}.hasOwnProperty,F=[],G=F.pop,H=F.push,I=F.push,J=F.slice,K=F.indexOf||function(a){for(var b=0,c=this.length;c>b;b++)if(this[b]===a)return b;return-1},L="checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped",M="[\\x20\\t\\r\\n\\f]",N="(?:\\\\.|[\\w-]|[^\\x00-\\xa0])+",O=N.replace("w","w#"),P="\\["+M+"*("+N+")(?:"+M+"*([*^$|!~]?=)"+M+"*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|("+O+"))|)"+M+"*\\]",Q=":("+N+")(?:\\((('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|((?:\\\\.|[^\\\\()[\\]]|"+P+")*)|.*)\\)|)",R=new RegExp("^"+M+"+|((?:^|[^\\\\])(?:\\\\.)*)"+M+"+$","g"),S=new RegExp("^"+M+"*,"+M+"*"),T=new RegExp("^"+M+"*([>+~]|"+M+")"+M+"*"),U=new RegExp("="+M+"*([^\\]'\"]*?)"+M+"*\\]","g"),V=new RegExp(Q),W=new RegExp("^"+O+"$"),X={ID:new RegExp("^#("+N+")"),CLASS:new RegExp("^\\.("+N+")"),TAG:new RegExp("^("+N.replace("w","w*")+")"),ATTR:new RegExp("^"+P),PSEUDO:new RegExp("^"+Q),CHILD:new RegExp("^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\("+M+"*(even|odd|(([+-]|)(\\d*)n|)"+M+"*(?:([+-]|)"+M+"*(\\d+)|))"+M+"*\\)|)","i"),bool:new RegExp("^(?:"+L+")$","i"),needsContext:new RegExp("^"+M+"*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\("+M+"*((?:-\\d)?\\d*)"+M+"*\\)|)(?=[^-]|$)","i")},Y=/^(?:input|select|textarea|button)$/i,Z=/^h\d$/i,$=/^[^{]+\{\s*\[native 
\w/,_=/^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/,ab=/[+~]/,bb=/'|\\/g,cb=new RegExp("\\\\([\\da-f]{1,6}"+M+"?|("+M+")|.)","ig"),db=function(a,b,c){var d="0x"+b-65536;return d!==d||c?b:0>d?String.fromCharCode(d+65536):String.fromCharCode(d>>10|55296,1023&d|56320)};try{I.apply(F=J.call(v.childNodes),v.childNodes),F[v.childNodes.length].nodeType}catch(eb){I={apply:F.length?function(a,b){H.apply(a,J.call(b))}:function(a,b){var c=a.length,d=0;while(a[c++]=b[d++]);a.length=c-1}}}function fb(a,b,d,e){var f,h,j,k,l,o,r,s,w,x;if((b?b.ownerDocument||b:v)!==n&&m(b),b=b||n,d=d||[],!a||"string"!=typeof a)return d;if(1!==(k=b.nodeType)&&9!==k)return[];if(p&&!e){if(f=_.exec(a))if(j=f[1]){if(9===k){if(h=b.getElementById(j),!h||!h.parentNode)return d;if(h.id===j)return d.push(h),d}else if(b.ownerDocument&&(h=b.ownerDocument.getElementById(j))&&t(b,h)&&h.id===j)return d.push(h),d}else{if(f[2])return I.apply(d,b.getElementsByTagName(a)),d;if((j=f[3])&&c.getElementsByClassName&&b.getElementsByClassName)return I.apply(d,b.getElementsByClassName(j)),d}if(c.qsa&&(!q||!q.test(a))){if(s=r=u,w=b,x=9===k&&a,1===k&&"object"!==b.nodeName.toLowerCase()){o=g(a),(r=b.getAttribute("id"))?s=r.replace(bb,"\\$&"):b.setAttribute("id",s),s="[id='"+s+"'] ",l=o.length;while(l--)o[l]=s+qb(o[l]);w=ab.test(a)&&ob(b.parentNode)||b,x=o.join(",")}if(x)try{return I.apply(d,w.querySelectorAll(x)),d}catch(y){}finally{r||b.removeAttribute("id")}}}return i(a.replace(R,"$1"),b,d,e)}function gb(){var a=[];function b(c,e){return a.push(c+" ")>d.cacheLength&&delete b[a.shift()],b[c+" "]=e}return b}function hb(a){return a[u]=!0,a}function ib(a){var b=n.createElement("div");try{return!!a(b)}catch(c){return!1}finally{b.parentNode&&b.parentNode.removeChild(b),b=null}}function jb(a,b){var c=a.split("|"),e=a.length;while(e--)d.attrHandle[c[e]]=b}function kb(a,b){var c=b&&a,d=c&&1===a.nodeType&&1===b.nodeType&&(~b.sourceIndex||D)-(~a.sourceIndex||D);if(d)return d;if(c)while(c=c.nextSibling)if(c===b)return-1;return a?1:-1}function 
lb(a){return function(b){var c=b.nodeName.toLowerCase();return"input"===c&&b.type===a}}function mb(a){return function(b){var c=b.nodeName.toLowerCase();return("input"===c||"button"===c)&&b.type===a}}function nb(a){return hb(function(b){return b=+b,hb(function(c,d){var e,f=a([],c.length,b),g=f.length;while(g--)c[e=f[g]]&&(c[e]=!(d[e]=c[e]))})})}function ob(a){return a&&typeof a.getElementsByTagName!==C&&a}c=fb.support={},f=fb.isXML=function(a){var b=a&&(a.ownerDocument||a).documentElement;return b?"HTML"!==b.nodeName:!1},m=fb.setDocument=function(a){var b,e=a?a.ownerDocument||a:v,g=e.defaultView;return e!==n&&9===e.nodeType&&e.documentElement?(n=e,o=e.documentElement,p=!f(e),g&&g!==g.top&&(g.addEventListener?g.addEventListener("unload",function(){m()},!1):g.attachEvent&&g.attachEvent("onunload",function(){m()})),c.attributes=ib(function(a){return a.className="i",!a.getAttribute("className")}),c.getElementsByTagName=ib(function(a){return a.appendChild(e.createComment("")),!a.getElementsByTagName("*").length}),c.getElementsByClassName=$.test(e.getElementsByClassName)&&ib(function(a){return a.innerHTML="",a.firstChild.className="i",2===a.getElementsByClassName("i").length}),c.getById=ib(function(a){return o.appendChild(a).id=u,!e.getElementsByName||!e.getElementsByName(u).length}),c.getById?(d.find.ID=function(a,b){if(typeof b.getElementById!==C&&p){var c=b.getElementById(a);return c&&c.parentNode?[c]:[]}},d.filter.ID=function(a){var b=a.replace(cb,db);return function(a){return a.getAttribute("id")===b}}):(delete d.find.ID,d.filter.ID=function(a){var b=a.replace(cb,db);return function(a){var c=typeof a.getAttributeNode!==C&&a.getAttributeNode("id");return c&&c.value===b}}),d.find.TAG=c.getElementsByTagName?function(a,b){return typeof b.getElementsByTagName!==C?b.getElementsByTagName(a):void 0}:function(a,b){var c,d=[],e=0,f=b.getElementsByTagName(a);if("*"===a){while(c=f[e++])1===c.nodeType&&d.push(c);return d}return 
f},d.find.CLASS=c.getElementsByClassName&&function(a,b){return typeof b.getElementsByClassName!==C&&p?b.getElementsByClassName(a):void 0},r=[],q=[],(c.qsa=$.test(e.querySelectorAll))&&(ib(function(a){a.innerHTML="",a.querySelectorAll("[msallowclip^='']").length&&q.push("[*^$]="+M+"*(?:''|\"\")"),a.querySelectorAll("[selected]").length||q.push("\\["+M+"*(?:value|"+L+")"),a.querySelectorAll(":checked").length||q.push(":checked")}),ib(function(a){var b=e.createElement("input");b.setAttribute("type","hidden"),a.appendChild(b).setAttribute("name","D"),a.querySelectorAll("[name=d]").length&&q.push("name"+M+"*[*^$|!~]?="),a.querySelectorAll(":enabled").length||q.push(":enabled",":disabled"),a.querySelectorAll("*,:x"),q.push(",.*:")})),(c.matchesSelector=$.test(s=o.matches||o.webkitMatchesSelector||o.mozMatchesSelector||o.oMatchesSelector||o.msMatchesSelector))&&ib(function(a){c.disconnectedMatch=s.call(a,"div"),s.call(a,"[s!='']:x"),r.push("!=",Q)}),q=q.length&&new RegExp(q.join("|")),r=r.length&&new RegExp(r.join("|")),b=$.test(o.compareDocumentPosition),t=b||$.test(o.contains)?function(a,b){var c=9===a.nodeType?a.documentElement:a,d=b&&b.parentNode;return a===d||!(!d||1!==d.nodeType||!(c.contains?c.contains(d):a.compareDocumentPosition&&16&a.compareDocumentPosition(d)))}:function(a,b){if(b)while(b=b.parentNode)if(b===a)return!0;return!1},B=b?function(a,b){if(a===b)return l=!0,0;var d=!a.compareDocumentPosition-!b.compareDocumentPosition;return d?d:(d=(a.ownerDocument||a)===(b.ownerDocument||b)?a.compareDocumentPosition(b):1,1&d||!c.sortDetached&&b.compareDocumentPosition(a)===d?a===e||a.ownerDocument===v&&t(v,a)?-1:b===e||b.ownerDocument===v&&t(v,b)?1:k?K.call(k,a)-K.call(k,b):0:4&d?-1:1)}:function(a,b){if(a===b)return l=!0,0;var c,d=0,f=a.parentNode,g=b.parentNode,h=[a],i=[b];if(!f||!g)return a===e?-1:b===e?1:f?-1:g?1:k?K.call(k,a)-K.call(k,b):0;if(f===g)return 
kb(a,b);c=a;while(c=c.parentNode)h.unshift(c);c=b;while(c=c.parentNode)i.unshift(c);while(h[d]===i[d])d++;return d?kb(h[d],i[d]):h[d]===v?-1:i[d]===v?1:0},e):n},fb.matches=function(a,b){return fb(a,null,null,b)},fb.matchesSelector=function(a,b){if((a.ownerDocument||a)!==n&&m(a),b=b.replace(U,"='$1']"),!(!c.matchesSelector||!p||r&&r.test(b)||q&&q.test(b)))try{var d=s.call(a,b);if(d||c.disconnectedMatch||a.document&&11!==a.document.nodeType)return d}catch(e){}return fb(b,n,null,[a]).length>0},fb.contains=function(a,b){return(a.ownerDocument||a)!==n&&m(a),t(a,b)},fb.attr=function(a,b){(a.ownerDocument||a)!==n&&m(a);var e=d.attrHandle[b.toLowerCase()],f=e&&E.call(d.attrHandle,b.toLowerCase())?e(a,b,!p):void 0;return void 0!==f?f:c.attributes||!p?a.getAttribute(b):(f=a.getAttributeNode(b))&&f.specified?f.value:null},fb.error=function(a){throw new Error("Syntax error, unrecognized expression: "+a)},fb.uniqueSort=function(a){var b,d=[],e=0,f=0;if(l=!c.detectDuplicates,k=!c.sortStable&&a.slice(0),a.sort(B),l){while(b=a[f++])b===a[f]&&(e=d.push(f));while(e--)a.splice(d[e],1)}return k=null,a},e=fb.getText=function(a){var b,c="",d=0,f=a.nodeType;if(f){if(1===f||9===f||11===f){if("string"==typeof a.textContent)return a.textContent;for(a=a.firstChild;a;a=a.nextSibling)c+=e(a)}else if(3===f||4===f)return a.nodeValue}else while(b=a[d++])c+=e(b);return c},d=fb.selectors={cacheLength:50,createPseudo:hb,match:X,attrHandle:{},find:{},relative:{">":{dir:"parentNode",first:!0}," ":{dir:"parentNode"},"+":{dir:"previousSibling",first:!0},"~":{dir:"previousSibling"}},preFilter:{ATTR:function(a){return a[1]=a[1].replace(cb,db),a[3]=(a[3]||a[4]||a[5]||"").replace(cb,db),"~="===a[2]&&(a[3]=" "+a[3]+" "),a.slice(0,4)},CHILD:function(a){return a[1]=a[1].toLowerCase(),"nth"===a[1].slice(0,3)?(a[3]||fb.error(a[0]),a[4]=+(a[4]?a[5]+(a[6]||1):2*("even"===a[3]||"odd"===a[3])),a[5]=+(a[7]+a[8]||"odd"===a[3])):a[3]&&fb.error(a[0]),a},PSEUDO:function(a){var b,c=!a[6]&&a[2];return 
X.CHILD.test(a[0])?null:(a[3]?a[2]=a[4]||a[5]||"":c&&V.test(c)&&(b=g(c,!0))&&(b=c.indexOf(")",c.length-b)-c.length)&&(a[0]=a[0].slice(0,b),a[2]=c.slice(0,b)),a.slice(0,3))}},filter:{TAG:function(a){var b=a.replace(cb,db).toLowerCase();return"*"===a?function(){return!0}:function(a){return a.nodeName&&a.nodeName.toLowerCase()===b}},CLASS:function(a){var b=y[a+" "];return b||(b=new RegExp("(^|"+M+")"+a+"("+M+"|$)"))&&y(a,function(a){return b.test("string"==typeof a.className&&a.className||typeof a.getAttribute!==C&&a.getAttribute("class")||"")})},ATTR:function(a,b,c){return function(d){var e=fb.attr(d,a);return null==e?"!="===b:b?(e+="","="===b?e===c:"!="===b?e!==c:"^="===b?c&&0===e.indexOf(c):"*="===b?c&&e.indexOf(c)>-1:"$="===b?c&&e.slice(-c.length)===c:"~="===b?(" "+e+" ").indexOf(c)>-1:"|="===b?e===c||e.slice(0,c.length+1)===c+"-":!1):!0}},CHILD:function(a,b,c,d,e){var f="nth"!==a.slice(0,3),g="last"!==a.slice(-4),h="of-type"===b;return 1===d&&0===e?function(a){return!!a.parentNode}:function(b,c,i){var j,k,l,m,n,o,p=f!==g?"nextSibling":"previousSibling",q=b.parentNode,r=h&&b.nodeName.toLowerCase(),s=!i&&!h;if(q){if(f){while(p){l=b;while(l=l[p])if(h?l.nodeName.toLowerCase()===r:1===l.nodeType)return!1;o=p="only"===a&&!o&&"nextSibling"}return!0}if(o=[g?q.firstChild:q.lastChild],g&&s){k=q[u]||(q[u]={}),j=k[a]||[],n=j[0]===w&&j[1],m=j[0]===w&&j[2],l=n&&q.childNodes[n];while(l=++n&&l&&l[p]||(m=n=0)||o.pop())if(1===l.nodeType&&++m&&l===b){k[a]=[w,n,m];break}}else if(s&&(j=(b[u]||(b[u]={}))[a])&&j[0]===w)m=j[1];else while(l=++n&&l&&l[p]||(m=n=0)||o.pop())if((h?l.nodeName.toLowerCase()===r:1===l.nodeType)&&++m&&(s&&((l[u]||(l[u]={}))[a]=[w,m]),l===b))break;return m-=e,m===d||m%d===0&&m/d>=0}}},PSEUDO:function(a,b){var c,e=d.pseudos[a]||d.setFilters[a.toLowerCase()]||fb.error("unsupported pseudo: "+a);return e[u]?e(b):e.length>1?(c=[a,a,"",b],d.setFilters.hasOwnProperty(a.toLowerCase())?hb(function(a,c){var 
d,f=e(a,b),g=f.length;while(g--)d=K.call(a,f[g]),a[d]=!(c[d]=f[g])}):function(a){return e(a,0,c)}):e}},pseudos:{not:hb(function(a){var b=[],c=[],d=h(a.replace(R,"$1"));return d[u]?hb(function(a,b,c,e){var f,g=d(a,null,e,[]),h=a.length;while(h--)(f=g[h])&&(a[h]=!(b[h]=f))}):function(a,e,f){return b[0]=a,d(b,null,f,c),!c.pop()}}),has:hb(function(a){return function(b){return fb(a,b).length>0}}),contains:hb(function(a){return function(b){return(b.textContent||b.innerText||e(b)).indexOf(a)>-1}}),lang:hb(function(a){return W.test(a||"")||fb.error("unsupported lang: "+a),a=a.replace(cb,db).toLowerCase(),function(b){var c;do if(c=p?b.lang:b.getAttribute("xml:lang")||b.getAttribute("lang"))return c=c.toLowerCase(),c===a||0===c.indexOf(a+"-");while((b=b.parentNode)&&1===b.nodeType);return!1}}),target:function(b){var c=a.location&&a.location.hash;return c&&c.slice(1)===b.id},root:function(a){return a===o},focus:function(a){return a===n.activeElement&&(!n.hasFocus||n.hasFocus())&&!!(a.type||a.href||~a.tabIndex)},enabled:function(a){return a.disabled===!1},disabled:function(a){return a.disabled===!0},checked:function(a){var b=a.nodeName.toLowerCase();return"input"===b&&!!a.checked||"option"===b&&!!a.selected},selected:function(a){return a.parentNode&&a.parentNode.selectedIndex,a.selected===!0},empty:function(a){for(a=a.firstChild;a;a=a.nextSibling)if(a.nodeType<6)return!1;return!0},parent:function(a){return!d.pseudos.empty(a)},header:function(a){return Z.test(a.nodeName)},input:function(a){return Y.test(a.nodeName)},button:function(a){var b=a.nodeName.toLowerCase();return"input"===b&&"button"===a.type||"button"===b},text:function(a){var b;return"input"===a.nodeName.toLowerCase()&&"text"===a.type&&(null==(b=a.getAttribute("type"))||"text"===b.toLowerCase())},first:nb(function(){return[0]}),last:nb(function(a,b){return[b-1]}),eq:nb(function(a,b,c){return[0>c?c+b:c]}),even:nb(function(a,b){for(var c=0;b>c;c+=2)a.push(c);return a}),odd:nb(function(a,b){for(var 
c=1;b>c;c+=2)a.push(c);return a}),lt:nb(function(a,b,c){for(var d=0>c?c+b:c;--d>=0;)a.push(d);return a}),gt:nb(function(a,b,c){for(var d=0>c?c+b:c;++db;b++)d+=a[b].value;return d}function rb(a,b,c){var d=b.dir,e=c&&"parentNode"===d,f=x++;return b.first?function(b,c,f){while(b=b[d])if(1===b.nodeType||e)return a(b,c,f)}:function(b,c,g){var h,i,j=[w,f];if(g){while(b=b[d])if((1===b.nodeType||e)&&a(b,c,g))return!0}else while(b=b[d])if(1===b.nodeType||e){if(i=b[u]||(b[u]={}),(h=i[d])&&h[0]===w&&h[1]===f)return j[2]=h[2];if(i[d]=j,j[2]=a(b,c,g))return!0}}}function sb(a){return a.length>1?function(b,c,d){var e=a.length;while(e--)if(!a[e](b,c,d))return!1;return!0}:a[0]}function tb(a,b,c){for(var d=0,e=b.length;e>d;d++)fb(a,b[d],c);return c}function ub(a,b,c,d,e){for(var f,g=[],h=0,i=a.length,j=null!=b;i>h;h++)(f=a[h])&&(!c||c(f,d,e))&&(g.push(f),j&&b.push(h));return g}function vb(a,b,c,d,e,f){return d&&!d[u]&&(d=vb(d)),e&&!e[u]&&(e=vb(e,f)),hb(function(f,g,h,i){var j,k,l,m=[],n=[],o=g.length,p=f||tb(b||"*",h.nodeType?[h]:h,[]),q=!a||!f&&b?p:ub(p,m,a,h,i),r=c?e||(f?a:o||d)?[]:g:q;if(c&&c(q,r,h,i),d){j=ub(r,n),d(j,[],h,i),k=j.length;while(k--)(l=j[k])&&(r[n[k]]=!(q[n[k]]=l))}if(f){if(e||a){if(e){j=[],k=r.length;while(k--)(l=r[k])&&j.push(q[k]=l);e(null,r=[],j,i)}k=r.length;while(k--)(l=r[k])&&(j=e?K.call(f,l):m[k])>-1&&(f[j]=!(g[j]=l))}}else r=ub(r===g?r.splice(o,r.length):r),e?e(null,g,r,i):I.apply(g,r)})}function wb(a){for(var b,c,e,f=a.length,g=d.relative[a[0].type],h=g||d.relative[" "],i=g?1:0,k=rb(function(a){return a===b},h,!0),l=rb(function(a){return K.call(b,a)>-1},h,!0),m=[function(a,c,d){return!g&&(d||c!==j)||((b=c).nodeType?k(a,c,d):l(a,c,d))}];f>i;i++)if(c=d.relative[a[i].type])m=[rb(sb(m),c)];else{if(c=d.filter[a[i].type].apply(null,a[i].matches),c[u]){for(e=++i;f>e;e++)if(d.relative[a[e].type])break;return vb(i>1&&sb(m),i>1&&qb(a.slice(0,i-1).concat({value:" 
"===a[i-2].type?"*":""})).replace(R,"$1"),c,e>i&&wb(a.slice(i,e)),f>e&&wb(a=a.slice(e)),f>e&&qb(a))}m.push(c)}return sb(m)}function xb(a,b){var c=b.length>0,e=a.length>0,f=function(f,g,h,i,k){var l,m,o,p=0,q="0",r=f&&[],s=[],t=j,u=f||e&&d.find.TAG("*",k),v=w+=null==t?1:Math.random()||.1,x=u.length;for(k&&(j=g!==n&&g);q!==x&&null!=(l=u[q]);q++){if(e&&l){m=0;while(o=a[m++])if(o(l,g,h)){i.push(l);break}k&&(w=v)}c&&((l=!o&&l)&&p--,f&&r.push(l))}if(p+=q,c&&q!==p){m=0;while(o=b[m++])o(r,s,g,h);if(f){if(p>0)while(q--)r[q]||s[q]||(s[q]=G.call(i));s=ub(s)}I.apply(i,s),k&&!f&&s.length>0&&p+b.length>1&&fb.uniqueSort(i)}return k&&(w=v,j=t),r};return c?hb(f):f}return h=fb.compile=function(a,b){var c,d=[],e=[],f=A[a+" "];if(!f){b||(b=g(a)),c=b.length;while(c--)f=wb(b[c]),f[u]?d.push(f):e.push(f);f=A(a,xb(e,d)),f.selector=a}return f},i=fb.select=function(a,b,e,f){var i,j,k,l,m,n="function"==typeof a&&a,o=!f&&g(a=n.selector||a);if(e=e||[],1===o.length){if(j=o[0]=o[0].slice(0),j.length>2&&"ID"===(k=j[0]).type&&c.getById&&9===b.nodeType&&p&&d.relative[j[1].type]){if(b=(d.find.ID(k.matches[0].replace(cb,db),b)||[])[0],!b)return e;n&&(b=b.parentNode),a=a.slice(j.shift().value.length)}i=X.needsContext.test(a)?0:j.length;while(i--){if(k=j[i],d.relative[l=k.type])break;if((m=d.find[l])&&(f=m(k.matches[0].replace(cb,db),ab.test(j[0].type)&&ob(b.parentNode)||b))){if(j.splice(i,1),a=f.length&&qb(j),!a)return I.apply(e,f),e;break}}}return(n||h(a,o))(f,b,!p,e,ab.test(a)&&ob(b.parentNode)||b),e},c.sortStable=u.split("").sort(B).join("")===u,c.detectDuplicates=!!l,m(),c.sortDetached=ib(function(a){return 1&a.compareDocumentPosition(n.createElement("div"))}),ib(function(a){return a.innerHTML="","#"===a.firstChild.getAttribute("href")})||jb("type|href|height|width",function(a,b,c){return c?void 0:a.getAttribute(b,"type"===b.toLowerCase()?1:2)}),c.attributes&&ib(function(a){return 