HEX
Server: Apache
System: Linux vps.rockyroadprinting.net 4.18.0 #1 SMP Mon Sep 30 15:36:27 MSK 2024 x86_64
User: rockyroadprintin (1011)
PHP: 8.2.29
Disabled: exec,passthru,shell_exec,system
File: //lib/python2.7/site-packages/bs4/__init__.pyc
[compiled CPython 2.7 bytecode of bs4/__init__.py -- the raw marshal data is not reproduced here; the module's docstrings, string constants, and class/method structure recovered from it follow, with method bodies omitted]

"""Beautiful Soup
Elixir and Tonic
"The Screen-Scraper's Friend"
http://www.crummy.com/software/BeautifulSoup/

Beautiful Soup uses a pluggable XML or HTML parser to parse a
(possibly invalid) document into a tree representation. Beautiful Soup
provides methods and Pythonic idioms that make it easy to navigate,
search, and modify the parse tree.

Beautiful Soup works with Python 2.7 and up. It works better if lxml
and/or html5lib is installed.

For more than you ever wanted to know about Beautiful Soup, see the
documentation:
http://www.crummy.com/software/BeautifulSoup/bs4/doc/
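
A minimal usage sketch (added here for illustration; "html.parser" names the
parser bundled with the standard library, and any installed parser works):

    from bs4 import BeautifulSoup
    soup = BeautifulSoup("<html><body><p>Hello, <b>world</b></p></body></html>",
                         "html.parser")
    print(soup.p.b.string)    # world
    print(soup.prettify())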

"""

__author__ = "Leonard Richardson (leonardr@segfault.org)"
__version__ = "4.6.3"
__copyright__ = "Copyright (c) 2004-2018 Leonard Richardson"
__license__ = "MIT"

__all__ = ['BeautifulSoup']

import os
import re
import sys
import traceback
import warnings

from .builder import builder_registry, ParserRejectedMarkup
from .dammit import UnicodeDammit
from .element import (
    CData,
    Comment,
    DEFAULT_OUTPUT_ENCODING,
    Declaration,
    Doctype,
    NavigableString,
    PageElement,
    ProcessingInstruction,
    ResultSet,
    SoupStrainer,
    Tag,
    )

# Deliberate Python 2-only comparison: under Python 3 this line is a syntax
# error, so running the unconverted package fails immediately with these two
# messages as the hint.
'You are trying to run the Python 2 version of Beautiful Soup under Python 3. This will not work.'<>'You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).'


class BeautifulSoup(Tag):
    """
    This class defines the basic interface called by the tree builders.

    These methods will be called by the parser:
      reset()
      feed(markup)

    The tree builder may call these methods from its feed() implementation:
      handle_starttag(name, attrs) # See note about return value
      handle_endtag(name)
      handle_data(data) # Appends to the current data node
      endData(containerClass=NavigableString) # Ends the current data node

    No matter how complicated the underlying parser is, you should be
    able to build a tree using 'start tag' events, 'end tag' events,
    'data' events, and "done with data" events.

    If you encounter an empty-element tag (aka a self-closing tag,
    like HTML's <br> tag), call handle_starttag and then
    handle_endtag.
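
    For example, after parsing markup containing HTML's <br>, the resulting
    tag reports itself as an empty element (an added sketch; "html.parser"
    is just one possible builder choice):

        soup = BeautifulSoup("<p>one<br>two</p>", "html.parser")
        soup.br.is_empty_element   # True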
    """
    ROOT_TAG_NAME = u'[document]'

    # Features used to pick a tree builder when the caller doesn't name one.
    DEFAULT_BUILDER_FEATURES = ['html', 'fast']

    ASCII_SPACES = '\x20\x0a\x09\x0c\x0d'

    NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, pass the additional argument 'features=\"%(parser)s\"' to the BeautifulSoup constructor.\n"

    def __init__(self, markup="", features=None, builder=None,
                 parse_only=None, from_encoding=None, exclude_encodings=None,
                 **kwargs):
        """Constructor.

        :param markup: A string or a file-like object representing
        markup to be parsed.

        :param features: Desirable features of the parser to be used. This
        may be the name of a specific parser ("lxml", "lxml-xml",
        "html.parser", or "html5lib") or it may be the type of markup
        to be used ("html", "html5", "xml"). It's recommended that you
        name a specific parser, so that Beautiful Soup gives you the
        same results across platforms and virtual environments.

        :param builder: A specific TreeBuilder to use instead of looking one
        up based on `features`. You shouldn't need to use this.

        :param parse_only: A SoupStrainer. Only parts of the document
        matching the SoupStrainer will be considered. This is useful
        when parsing part of a document that would otherwise be too
        large to fit into memory.

        :param from_encoding: A string indicating the encoding of the
        document to be parsed. Pass this in if Beautiful Soup is
        guessing wrongly about the document's encoding.

        :param exclude_encodings: A list of strings indicating
        encodings known to be wrong. Pass this in if you don't know
        the document's encoding but you know Beautiful Soup's guess is
        wrong.

        :param kwargs: For backwards compatibility purposes, the
        constructor accepts certain keyword arguments used in
        Beautiful Soup 3. None of these arguments do anything in
        Beautiful Soup 4 and there's no need to actually pass keyword
        arguments into the constructor.
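
        An added sketch of the arguments described above (the markup, the
        encoding, and the "html.parser" choice are made-up examples):

            from bs4 import BeautifulSoup
            raw = u"<p>caf\u00e9</p>".encode("latin-1")
            soup = BeautifulSoup(raw, features="html.parser",
                                 from_encoding="latin-1")
            soup.p.string    # u'caf\u00e9'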
        """
        # Constructor body not recovered from the bytecode. The warning
        # strings it carries for deprecated Beautiful Soup 3 arguments are:
        #   convertEntities: "BS4 does not respect the convertEntities
        #     argument to the BeautifulSoup constructor. Entities are always
        #     converted to Unicode characters."
        #   markupMassage: "BS4 does not respect the markupMassage argument
        #     to the BeautifulSoup constructor. The tree builder is
        #     responsible for any necessary markup massage."
        #   smartQuotesTo: "BS4 does not respect the smartQuotesTo argument
        #     to the BeautifulSoup constructor. Smart quotes are always
        #     converted to Unicode characters."
        #   selfClosingTags: "BS4 does not respect the selfClosingTags
        #     argument to the BeautifulSoup constructor. The tree builder is
        #     responsible for understanding self-closing tags."
        #   isHTML: "BS4 does not respect the isHTML argument to the
        #     BeautifulSoup constructor. Suggest you use features='lxml' for
        #     HTML and features='lxml-xml' for XML."
        #   renamed arguments (parseOnlyThese -> parse_only, fromEncoding ->
        #     from_encoding): 'The "%s" argument to the BeautifulSoup
        #     constructor has been renamed to "%s."'
        # Other recovered constructor messages:
        #   "You provided Unicode markup but also provided a value for
        #    from_encoding. Your from_encoding will be ignored."
        #   "__init__() got an unexpected keyword argument '%s'"
        #   "Couldn't find a tree builder with the features you requested:
        #    %s. Do you need to install a parser library?"
        #   '"%s" looks like a filename, not markup. You should probably
        #    open this file and pass the filehandle into Beautiful Soup.'

    # __copy__() and __getstate__() are defined next; their bodies were not
    # recovered.

    @staticmethod
    def _check_markup_is_url(markup):
        """
        Check if markup looks like it's actually a url and raise a warning 
        if so. Markup can be unicode or str (py2) / bytes (py3).
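
        For example, BeautifulSoup("http://example.com/") triggers this
        warning; the fix is to fetch the page first and pass its text in
        (requests is the client the warning message itself suggests):

            import requests
            from bs4 import BeautifulSoup
            html = requests.get("http://example.com/").text
            soup = BeautifulSoup(html, "html.parser")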
        """
        # (body not recovered; when the markup does look like a URL it
        #  warns: '"%s" looks like a URL. Beautiful Soup is not an HTTP
        #  client. You should probably use an HTTP client like requests to
        #  get the document behind the URL, and feed that document to
        #  Beautiful Soup.')

    # _feed() and reset() are defined next; their bodies were not recovered.

    def new_tag(self, name, namespace=None, nsprefix=None, attrs={},
                **kwattrs):
        """Create a new tag associated with this soup."""

    def new_string(self, s, subclass=NavigableString):
        """Create a new NavigableString associated with this soup."""

    def insert_before(self, successor):
        raise NotImplementedError("BeautifulSoup objects don't support insert_before().")

    def insert_after(self, successor):
        raise NotImplementedError("BeautifulSoup objects don't support insert_after().")

    # popTag(), pushTag(tag) and endData(containerClass=NavigableString) are
    # defined next; their bodies were not recovered.

    def object_was_parsed(self, o, parent=None, most_recent_element=None):
        """Add an object to the parse tree."""
        # (body not recovered; on inconsistency it raises an error with
        #  "Error building tree: supposedly %r was inserted into %r after
        #   the fact, but I don't see it!")

    def _popToTag(self, name, nsprefix=None, inclusivePop=True):
        """Pops the tag stack up to and including the most recent
        instance of the given tag. If inclusivePop is false, pops the tag
        stack up to but *not* including the most recent instance of
        the given tag."""

    def handle_starttag(self, name, namespace, nsprefix, attrs):
        """Push a start tag on to the stack.

        If this method returns None, the tag was rejected by the
        SoupStrainer. You should proceed as if the tag had not occurred
        in the document. For instance, if this was a self-closing tag,
        don't call handle_endtag.
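
        An added sketch of the user-visible effect: with a SoupStrainer that
        only matches <a>, every other start tag is rejected and never enters
        the tree ("html.parser" is assumed here):

            from bs4 import BeautifulSoup, SoupStrainer
            strainer = SoupStrainer("a")
            soup = BeautifulSoup("<div><a>x</a><b>y</b></div>",
                                 "html.parser", parse_only=strainer)
            soup.find("a").string   # u'x'
            soup.find("b")          # None -- the <b> start tag was rejected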
        """
        # (body not recovered; a rejected tag is not added to the tree and
        #  None is returned)

    # handle_endtag(name, nsprefix=None) and handle_data(data) are defined
    # next; their bodies were not recovered.

    def decode(self, pretty_print=False,
               eventual_encoding=DEFAULT_OUTPUT_ENCODING,
               formatter="minimal"):
        """Returns a string or Unicode representation of this document.
        To get Unicode, pass None for encoding."""
        # (body not recovered; for an XML soup it prepends an
        #  '<?xml version="1.0"%s?>' declaration, filling in an optional
        #  ' encoding="%s"' attribute)


# Module-level aliases recovered from the bytecode.
_s = BeautifulSoup
_soup = BeautifulSoup


class BeautifulStoneSoup(BeautifulSoup):
    """Deprecated interface to an XML parser."""

    def __init__(self, *args, **kwargs):
        kwargs['features'] = 'xml'
        warnings.warn(
            'The BeautifulStoneSoup class is deprecated. Instead of using '
            'it, pass features="xml" into the BeautifulSoup constructor.')
        super(BeautifulStoneSoup, self).__init__(*args, **kwargs)


class StopParsing(Exception):
    pass


class FeatureNotFound(ValueError):
    pass


# When run as a script, act as an HTML pretty-printer on standard input.
if __name__ == '__main__':
    import sys
    soup = BeautifulSoup(sys.stdin)
    print soup.prettify()
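
# Added illustration (not part of the recovered module): the replacement the
# BeautifulStoneSoup deprecation message recommends -- pass features="xml" to
# BeautifulSoup itself. This assumes the lxml package is installed, since it
# provides the XML tree builder.
xml_soup = BeautifulSoup("<doc><item>one</item></doc>", features="xml")
print(xml_soup.item.string)    # one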