Beautiful Soup (HTML parser)

From Wikipedia, the free encyclopedia
Beautiful Soup
Original author: Leonard Richardson
Initial release: 2004
Written in: Python
Platform: Python
Type: HTML parser library, web scraping
Website: www.crummy.com/software/BeautifulSoup/

    Beautiful Soup is a Python package for parsing HTML and XML documents, including those with malformed markup. It creates a parse tree for documents that can be used to extract data from HTML,[2] which is useful for web scraping.[1][3]
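    As a minimal sketch of this behaviour with malformed markup (using the bundled html.parser backend; other parsers may repair the markup differently), unclosed tags still yield a well-formed, searchable tree:

```python
from bs4 import BeautifulSoup

# "Tag soup": the <b> and <i> tags are never closed
soup = BeautifulSoup("<p>Some <b>bad <i>HTML", "html.parser")

# The parser closes the open tags, producing a well-formed tree
print(soup.prettify())
print(soup.i.get_text())  # HTML
```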

    History


    Beautiful Soup was started in 2004 by Leonard Richardson.[citation needed] It takes its name from the poem "Beautiful Soup" in Alice's Adventures in Wonderland[4] and is a reference to the term "tag soup", which describes poorly structured HTML code.[5] Richardson continues to contribute to the project,[6] which is additionally supported by paid open-source maintainers from the company Tidelift.[7]

    Versions


    Beautiful Soup 3 was the official release line of Beautiful Soup from May 2006 to March 2012. The current release is Beautiful Soup 4.x.

    Support for Python 2.7 was retired in 2021; release 4.9.3 was the last version to support it.[8]

    Usage


    Beautiful Soup represents parsed data as a tree which can be searched and iterated over with ordinary Python loops.[9]
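    As an illustrative sketch (the HTML string and tag names here are made up for the example), tags can be reached as attributes of the tree, and the results of find_all() can be iterated with a plain for loop:

```python
from bs4 import BeautifulSoup

html = """<html><body>
<p class="title">Example</p>
<a href="/one">First</a>
<a href="/two">Second</a>
</body></html>"""

soup = BeautifulSoup(html, "html.parser")

# Navigate the tree by tag name
print(soup.p.string)  # Example

# Search the tree and iterate over the matches
for a in soup.find_all("a"):
    print(a["href"], a.get_text())
```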

    Code example


    The example below uses the Python standard library's urllib[10] to load Wikipedia's main page, then uses Beautiful Soup to parse the document and search for all links within.

    #!/usr/bin/env python3
    # Anchor extraction from HTML document
    from bs4 import BeautifulSoup
    from urllib.request import urlopen
    
    with urlopen("https://en.wikipedia.org/wiki/Main_Page") as response:
        soup = BeautifulSoup(response, "html.parser")
        for anchor in soup.find_all("a"):
            print(anchor.get("href", "/"))
    

    Another example uses the Python requests library[11] to fetch a page and print the text of every div element on it.

    import requests
    from bs4 import BeautifulSoup
    
    url = "https://wikipedia.com"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")
    divs = soup.find_all("div")
    
    for div in divs:
        print(div.text.strip())
    
