Browser Automation - Selenium


About

Selenium is a suite of tools to automate web browsers across many platforms.

Tools

WebDriver

At the core of Selenium is WebDriver, a programming interface (and its per-browser implementations) for driving browsers and performing automation actions in them.
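As a minimal sketch of what driving a browser through WebDriver looks like in Python (assuming the `selenium` package and a matching Chrome/chromedriver install; the URL and selector are illustrative):

```python
# Minimal WebDriver sketch: open a page, read its title, inspect an element.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run without a visible window

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com")            # navigate to the page
    print(driver.title)                          # the page <title>
    heading = driver.find_element(By.TAG_NAME, "h1")
    print(heading.text)                          # text of the first <h1>
finally:
    driver.quit()                                # always release the browser
```

The same pattern works with `webdriver.Firefox()` or any other browser for which a driver implementation is installed.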

IDE

The IDE lets you create browser automation scripts from a GUI and

  • execute them
  • or export them as code (Java, Python, …)

See also: https://intuit.github.io/karate/

Grid

Selenium Grid lets you run tests on a grid of machines and manage multiple environments from a central point, making it easy to run the tests against a vast combination of browsers and operating systems.
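From the client side, targeting a Grid only changes how the driver is created: instead of launching a local browser, the script connects to the hub, which dispatches the session to a node. A sketch, assuming a Grid hub is reachable at the (illustrative) URL below:

```python
# Sketch: run a session remotely through a Selenium Grid hub.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()  # requested browser; the hub picks a matching node

driver = webdriver.Remote(
    command_executor="http://grid-hub.example:4444/wd/hub",  # hypothetical hub URL
    options=options,
)
try:
    driver.get("https://example.com")
    print(driver.title)
finally:
    driver.quit()  # frees the node for the next session
```

Because only the `webdriver.Remote` construction differs, the same test code can run locally during development and on the grid in CI.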

Documentation / Reference




