From f215e02bf85f68d3a6106c2a1f4f7f063f819064 Mon Sep 17 00:00:00 2001 From: Daniel Baumann Date: Thu, 11 Apr 2024 10:17:27 +0200 Subject: Adding upstream version 7.0.14-dfsg. Signed-off-by: Daniel Baumann --- .../ValidationKit/docs/AutomaticTestingRevamp.html | 1354 ++++++++++++++++++++ .../ValidationKit/docs/AutomaticTestingRevamp.txt | 1061 +++++++++++++++ src/VBox/ValidationKit/docs/Makefile.kmk | 69 + src/VBox/ValidationKit/docs/TestBoxImaging.html | 758 +++++++++++ src/VBox/ValidationKit/docs/TestBoxImaging.txt | 368 ++++++ .../docs/VBoxAudioValidationKitReadMe.html | 601 +++++++++ .../docs/VBoxAudioValidationKitReadMe.txt | 207 +++ .../docs/VBoxValidationKitReadMe.html | 467 +++++++ .../ValidationKit/docs/VBoxValidationKitReadMe.txt | 113 ++ src/VBox/ValidationKit/docs/WindbgPython.txt | 10 + src/VBox/ValidationKit/docs/testbox-maintenance.sh | 409 ++++++ src/VBox/ValidationKit/docs/testbox-pxe-conf.sh | 162 +++ src/VBox/ValidationKit/docs/valkit.txt | 1 + 13 files changed, 5580 insertions(+) create mode 100644 src/VBox/ValidationKit/docs/AutomaticTestingRevamp.html create mode 100644 src/VBox/ValidationKit/docs/AutomaticTestingRevamp.txt create mode 100644 src/VBox/ValidationKit/docs/Makefile.kmk create mode 100644 src/VBox/ValidationKit/docs/TestBoxImaging.html create mode 100644 src/VBox/ValidationKit/docs/TestBoxImaging.txt create mode 100644 src/VBox/ValidationKit/docs/VBoxAudioValidationKitReadMe.html create mode 100644 src/VBox/ValidationKit/docs/VBoxAudioValidationKitReadMe.txt create mode 100644 src/VBox/ValidationKit/docs/VBoxValidationKitReadMe.html create mode 100644 src/VBox/ValidationKit/docs/VBoxValidationKitReadMe.txt create mode 100644 src/VBox/ValidationKit/docs/WindbgPython.txt create mode 100755 src/VBox/ValidationKit/docs/testbox-maintenance.sh create mode 100755 src/VBox/ValidationKit/docs/testbox-pxe-conf.sh create mode 100644 src/VBox/ValidationKit/docs/valkit.txt (limited to 'src/VBox/ValidationKit/docs') diff --git a/src/VBox/ValidationKit/docs/AutomaticTestingRevamp.html b/src/VBox/ValidationKit/docs/AutomaticTestingRevamp.html new file mode 100644 index 00000000..f5501717 --- /dev/null +++ b/src/VBox/ValidationKit/docs/AutomaticTestingRevamp.html @@ -0,0 +1,1354 @@ + + + + + + +AutomaticTestingRevamp.txt + + + +
+ + +
+

Revamp of Automatic VirtualBox Testing

+
+

Introduction

+

This is the design document for a revamped automatic testing framework. +The revamp aims at replacing the current tinderbox based testing by a new +system that is written from scratch.

+

The old system is not easy to work with and was never meant to be used for
managing tests; after all, it is just a simple build manager tailored for
continuous building. Modifying the existing tinderbox system to do what
we want would require fundamental changes that would render it useless as
a build manager, so it would end up as a fork. The amount of work
required would probably be about the same as writing a new system from
scratch. Other considerations, such as the license of the tinderbox
system (MPL) and the language it is realized in (Perl), are also in favor of
doing it from scratch.

+

The language envisioned for the new automatic testing framework is Python. This +is for several reasons:

+
+
  • The VirtualBox API has Python bindings.
  • Python is used quite a bit inside Sun (dunno about Oracle).
  • Works relatively well with Apache for the server side bits.
  • It is more difficult to produce write-only code in Python (alias the
    we-don't-like-perl argument).
  • You don't need to compile stuff.

Note that the author of this document has no special training as a test +engineer and may therefore be using the wrong terms here and there. The +primary focus is to express what we need to do in order to improve +testing.

+

This document is written in reStructuredText (rst) which just happens to +be used by Python, the primary language for this revamp. For more +information on reStructuredText: http://docutils.sourceforge.net/rst.html

+
+
+
+

Definitions / Glossary

+
+
sub-test driver
    A set of test cases that can be used by more than one test driver. Could
    also be called a test unit, in the Pascal sense of unit, if it wasn't so
    easily confused with 'unit test'.

test
    This is somewhat ambiguous and this document tries to avoid using it where
    possible. When used it normally refers to doing testing by executing one or
    more testcases.

test case
    A set of inputs, test programs and expected results. It validates system
    requirements and generates a pass or fail status. A basic unit of testing.
    Note that we use the term in a rather broad sense.

test driver
    A program/script used to execute a test. Also known as a test harness.
    Generally abbreviated 'td'. It can have sub-test drivers.

test manager
    Software managing the automatic testing. This is a web application that runs
    on a dedicated server (tindertux).

test set
    The output of a testing activity. Logs, results, ++. Our usage of this should
    probably be renamed to 'test run'.

test group
    A collection of related test cases.

testbox
    A computer that does testing.

testbox script
    Script executing orders from the test manager on a testbox. Started
    automatically upon bootup.

testing
    todo

TODO: Check that we've got all this right and make the definitions more exact
where possible.
+
+

See also http://encyclopedia2.thefreedictionary.com/testing%20types +and http://www.aptest.com/glossary.html .

+
+
+

Objectives

+
+
  • A scalable test manager (>200 testboxes).
  • Optimize the web user interface (WUI) for typical workflows and analysis.
  • Efficient and flexible test configuration.
  • Import test results from other test systems (logo testing, VDI, ++).
  • Easy to add lots of new testscripts.
  • Run tests locally without a manager.
  • Revamp a bit at a time.
+
+
+
+

The Testbox Side

+

Each testbox has a unique name corresponding to its DNS zone entry. When booted +a testbox script is started automatically. This script will query the test +manager for orders and execute them. The core order downloads and executes a +test driver with parameters (configuration) from the server. The test driver +does all the necessary work for executing the test. In a typical VirtualBox +test this means picking a build, installing it, configuring VMs, running the +test VMs, collecting the results, submitting them to the server, and finally +cleaning up afterwards.

+

The testbox environment which the test drivers are executed in will have a +number of environment variables for determining location of the source images +and other test data, scratch space, test set id, server URL, and so on and so +forth.

+

On startup, the testbox script will look for crash dumps and similar on +systems where this is possible. If any sign of a crash is found, it will +put any dumps and reports in the upload directory and inform the test +manager before reporting for duty. In order to generate the proper file +names and report the crash in the right test set as well as prevent +reporting crashes unrelated to automatic testing, the testbox script will +keep information (test set id, ++) in a separate scratch directory +(${TESTBOX_PATH_SCRATCH}/../testbox) and make sure it is synced to the +disk (both files and directories).

+

After checking for crashes, the testbox script will clean up any previous test
which might be around. This involves first invoking the test script in cleanup
mode and then wiping the scratch space.

+

When reporting for duty the script will submit information about the host: OS +name, OS version, OS bitness, CPU vendor, total number of cores, VT-x support, +AMD-V support, amount of memory, amount of scratch space, and anything else that +can be found useful for scheduling tests or filtering test configurations.

+
+

Testbox Script Orders

+

The orders are kept in a queue on the server and the testbox script will fetch +them one by one. Orders that cannot be executed at the moment will be masked in +the query from the testbox.

+
+
Execute Test Driver
    Downloads and executes the specified test driver with the given
    configuration (arguments). Only one test driver can be executed at a time.
    The server can specify more than one ZIP file to be downloaded and unpacked
    before executing the test driver. The testbox script may cache these zip
    files using http time stamping.

Abort Test Driver
    Aborts the current test driver. This will drop a hint to the driver and give
    it 60 seconds to shut down the normal way. If that fails, the testbox script
    will kill the driver processes (SIGKILL or equivalent), invoke the
    testdriver in cleanup mode, and finally wipe the scratch area. Should either
    of the last two steps fail in some way, the testbox will be rebooted.

Idle
    Ask again in X seconds, where X is specified by the server.

Reboot
    Reboot the testbox. If a test driver is currently running, an attempt at
    aborting it (Abort Test Driver) will be made first.

Update
    Updates the testbox script. The order includes a server relative path to the
    new testbox script. This can only be executed when no test driver is
    currently being executed.
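To make the order handling a little more concrete, here is a minimal sketch of
how the testbox script's main loop could dispatch these orders. It is only an
illustration under assumed names; the connection and helper objects
(fetch_next_order, exec_driver and friends) are hypothetical, not the actual
testbox script API::

    # Minimal sketch of the testbox script's order loop (assumed helper
    # objects, not the real testbox script code).
    import time

    def order_loop(conn, box):
        """Fetch orders from the test manager via 'conn', act on them via 'box'.

        'conn' is assumed to expose fetch_next_order(); 'box' is assumed to
        expose exec_driver(), abort_driver(), reboot() and update_script().
        """
        while True:
            order = conn.fetch_next_order()          # e.g. {'cmd': 'IDLE', 'secs': 60}
            cmd = order.get('cmd', 'IDLE')
            if cmd == 'EXEC':
                # Only one test driver at a time; the zips may be cached
                # using http time stamping before unpacking and execution.
                box.exec_driver(order['zips'], order['script'], order['args'])
            elif cmd == 'ABORT':
                box.abort_driver(grace_seconds=60)   # hint, then SIGKILL + cleanup mode
            elif cmd == 'REBOOT':
                box.abort_driver(grace_seconds=60)
                box.reboot()
            elif cmd == 'UPDATE':
                box.update_script(order['path'])     # only when no driver is running
            else:                                    # 'IDLE' or unknown: wait, ask again.
                time.sleep(order.get('secs', 60))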
+
+
+
+

Testbox Environment: Variables

+
+
COMSPEC
+
This will be set to C:\Windows\System32\cmd.exe on Windows.
+
PATH
+
This will contain the kBuild binary directory for the host platform.
+
SHELL
+
This will be set to point to kmk_ash(.exe) on all platforms.
+
TESTBOX_NAME
+
The testbox name. +This is not required by the local reporter.
+
TESTBOX_PATH_BUILDS
+
The absolute path to where the build repository can be found. This should be +a read only mount when possible.
+
TESTBOX_PATH_RESOURCES
+
The absolute path to where static test resources like ISOs and VDIs can be
found. The test drivers know the layout of this. This should be a read only
mount when possible.
+
TESTBOX_PATH_SCRATCH
+
The absolute path to the scratch space. This is the current directory when +starting the test driver. It will be wiped automatically after executing the +test. +(Envisioned as ${TESTBOX_PATH_SCRIPTS}/../scratch and that +${TESTBOX_PATH_SCRATCH}/ will be automatically wiped by the testbox script.)
+
TESTBOX_PATH_SCRIPTS
+
The absolute path to the test driver and the other files that were unzipped
together with it. This is also where the test-driver-abort file will be put.
(Envisioned as ${TESTBOX_PATH_SCRATCH}/../driver, see above.)
+
TESTBOX_PATH_UPLOAD
+
The absolute path to the upload directory for the testbox. This is for
putting VOBs, PNGs, core dumps, crash dumps, and such on. The files should be
bzipped or zipped if they aren't compressed already. The names should contain
the testbox and test set ID.
+
TESTBOX_REPORTER
+
The name of the test reporter back end. If not present, it will default to +the local reporter.
+
TESTBOX_TEST_SET_ID
+
The test set ID if we're running. +This is not required by the local reporter.
+
TESTBOX_MANAGER_URL
+
The URL to the test manager. +This is not required by the local reporter.
+
TESTBOX_XYZ
+
There will probably be some more of these.
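As an illustration of how a test driver might consume this environment, here
is a small sketch. The defaulting rules for local runs are assumptions, not
the actual framework code::

    # Sketch: reading the testbox environment from a test driver
    # (the local-run defaults are assumptions).
    import os

    def get_testbox_env():
        """Collect the TESTBOX_* variables, falling back to local-run defaults."""
        cwd = os.getcwd()
        return {
            'name':      os.environ.get('TESTBOX_NAME', 'local'),
            'builds':    os.environ.get('TESTBOX_PATH_BUILDS', ''),
            'resources': os.environ.get('TESTBOX_PATH_RESOURCES', ''),
            'scratch':   os.environ.get('TESTBOX_PATH_SCRATCH', cwd),
            'scripts':   os.environ.get('TESTBOX_PATH_SCRIPTS', cwd),
            'upload':    os.environ.get('TESTBOX_PATH_UPLOAD', cwd),
            'reporter':  os.environ.get('TESTBOX_REPORTER', 'local'),
            'testsetid': os.environ.get('TESTBOX_TEST_SET_ID'),   # None for local runs
            'url':       os.environ.get('TESTBOX_MANAGER_URL'),   # None for local runs
        }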
+
+
+
+

Testbox Environment: Core Utilities

+

The testbox will not provide the typical unix /bin and /usr/bin utilities. In +other words, cygwin will not be used on Windows!

+

The testbox will provide the unixy utilities that ship with kBuild and possibly
some additional ones from tools/*.*/bin in the VirtualBox tree (wget, unzip,
zip, and so on). The test drivers will avoid invoking any of these utilities
directly and instead rely on generic utility methods in the test driver
framework. That way we can more easily reimplement the functionality of the
core utilities and drop the dependency on them. It also allows us to quickly
work around platform specific oddities and bugs.
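The point of those generic utility methods can be illustrated with a small
sketch; the function names here are just illustrative, not the real framework
API::

    # Sketch of framework-level utility helpers that hide platform details
    # (names are illustrative, not the actual test driver framework).
    import os
    import shutil
    import zipfile

    def wipe_directory(path):
        """Remove everything below 'path' but keep the directory itself."""
        for entry in os.listdir(path):
            full = os.path.join(path, entry)
            if os.path.isdir(full) and not os.path.islink(full):
                shutil.rmtree(full, ignore_errors=True)
            else:
                os.remove(full)

    def unzip_file(zip_path, dst_dir):
        """Unpack a zip archive, creating the destination if necessary."""
        if not os.path.isdir(dst_dir):
            os.makedirs(dst_dir)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(dst_dir)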

+
+
+

Test Drivers

+

The test drivers are programs that will do the actual testing. In addition to
running under the testbox script, they can be executed in the VirtualBox
development environment. This is important for bug analysis and for simplifying
local testing by the developers before committing changes. It also means the
test drivers can be developed locally in the VirtualBox development environment.

+

The main difference between executing a driver under the testbox script and +running it manually is that there is no test manager in the latter case. The +test result reporter will not talk to the server, but report things to a local +log file and/or standard out/err. When invoked manually, all the necessary +arguments will need to be specified by hand of course - it should be possible +to extract them from a test set as well.

+

For the early implementation stages, an implementation of the reporter interface +that talks to the tinderbox base test manager will be needed. This will be +dropped later on when a new test manager is ready.

+

As hinted at in other sections, there will be a common framework +(libraries/packages/classes) for taking care of the tedious bits that every +test driver needs to do. Sharing code is essential to easing test driver +development as well as reducing their complexity. The framework will contain:

+
+
  • A generic way of submitting output. This will be a generic interface with
    multiple implementations; the TESTBOX_REPORTER environment variable
    will decide which of them to use. The interface will have very specific
    methods to allow the reporter to do the best possible job in reporting the
    results to the test manager. (A rough sketch of such an interface follows
    below.)

  • Helpers for typical tasks, like:

      • Copying files.
      • Deleting files, directory trees and scratch space.
      • Unzipping files.
      • Creating ISOs.
      • And such things.

  • Helpers for installing and uninstalling VirtualBox.

  • Helpers for defining VMs. (The VBox API where available.)

  • Helpers for controlling VMs. (The VBox API where available.)
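Here is a rough sketch of what such a reporter interface could look like; the
class and method names are assumptions for illustration, not the final
framework API::

    # Sketch of the reporter abstraction (illustrative names only).
    import os
    import sys

    class ReporterBase(object):
        """Interface the test drivers report through; the backend is picked
        via the TESTBOX_REPORTER environment variable."""
        def start_test(self, name):
            raise NotImplementedError
        def log(self, text):
            raise NotImplementedError
        def add_value(self, name, value):
            raise NotImplementedError
        def done_test(self, passed):
            raise NotImplementedError

    class LocalReporter(ReporterBase):
        """Default backend: writes to stdout / a local log, no test manager."""
        def start_test(self, name):
            print('** start: %s' % name)
        def log(self, text):
            print(text)
        def add_value(self, name, value):
            print('%s=%s' % (name, value))
        def done_test(self, passed):
            print('** done: %s' % ('PASSED' if passed else 'FAILED'))

    def create_reporter():
        """Pick a backend based on TESTBOX_REPORTER (only 'local' sketched here)."""
        which = os.environ.get('TESTBOX_REPORTER', 'local')
        if which != 'local':
            sys.stderr.write('warning: unknown reporter %r, using local\n' % which)
        return LocalReporter()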
+

The VirtualBox bits will be separate from the more generic ones, simply because
this is cleaner and it will allow us to reuse the system for testing other
products.

+

The framework will be packaged in a zip file separate from the test driver so we
don't waste time and space downloading the same common code.

+

The test driver will poll for the file +${TESTBOX_PATH_SCRIPTS}/test-driver-abort and abort all testing when it sees it.

+

The test driver can be invoked in three modes: execute, help and cleanup. The
default is execute mode, help shows a configuration summary, and cleanup is for
cleaning up after a reboot or an aborted run. The latter is done by the
testbox script on startup and after an abort - the driver is expected to clean
up by itself after a normal run.
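A minimal sketch of the mode handling and the abort-file polling might look
like this; the 'driver' object and its methods are assumptions used for
illustration only::

    # Sketch: test driver entry point with execute/help/cleanup modes and
    # polling for the abort file (illustrative only).
    import os
    import sys
    import time

    ABORT_FILE = os.path.join(os.environ.get('TESTBOX_PATH_SCRIPTS', '.'),
                              'test-driver-abort')

    def abort_requested():
        """The testbox script drops this file when the test manager says abort."""
        return os.path.exists(ABORT_FILE)

    def main(argv, driver):
        """Dispatch on mode; 'driver' is assumed to expose show_help(),
        cleanup(), step(), done() and passed()."""
        mode = argv[1] if len(argv) > 1 else 'execute'
        if mode == 'help':
            return driver.show_help()
        if mode == 'cleanup':
            return driver.cleanup()
        while not driver.done():
            if abort_requested():
                driver.cleanup()
                return 1
            driver.step()
            time.sleep(1)
        return 0 if driver.passed() else 1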

+
+
+
+

The Server Side

+

The server side will be implemented using a webserver (apache), a database
(postgres) and cgi scripts (Python). In addition, a cron job (Python) running
once a minute will generate static html for frequently used pages and maybe
execute some other tasks for driving the testing forwards. The order queries
from the testbox script are the primary driving force in the system. Together
these parts make up the test manager.

+

The test manager can be split up into three rough parts:

+
+
    +
  • Configuration (of tests, testgroups and testboxes).
  • +
  • Execution (of tests, collecting and organizing the output).
  • +
  • Analysis (of test output, mostly about presentation).
  • +
+
+
+
+

Test Manager: Requirements

+

List of requirements:

+
+
    +
  • Two level testing - L1 quick smoke tests and L2 longer tests performed on +builds passing L1. (Klaus (IIRC) meant this could be realized using +test dependency.)
  • +
  • Black listing builds (by revision or similar) known to be bad.
  • +
  • Distinguish between build types so we can do a portion of the testing with +strict builds.
  • +
  • Easy to re-configure build source for testing different branch or for +testing a release candidate. (Directory based is fine.)
  • +
  • Useful to be able to partition testboxes (run specific builds on some +boxes, let an engineer have a few boxes for a while).
  • +
  • Interaction with ILOM/...: reset systems.
  • +
  • Be able to suspend testing on selected testboxes when doing maintenance +(where automatically resuming testing on reboot is undesired) or similar +activity.
  • +
  • Abort testing on selected testboxes.
  • +
  • Scheduling of tests requiring more than one testbox.
  • +
  • Scheduling of tests that cannot be executed concurrently on several
    machines because of some global resource like an iSCSI target.
  • +
  • Jump the scheduling queue. Scheduling of specified test the next time a +testbox is available (optionally specifying which testbox to schedule it +on).
  • +
  • +
    Configure tests with variable configuration to get better coverage. Two modes:
    +
      +
    • TM generates the permutations based on one or more sets of test script arguments.
    • +
    • Each configuration permutation is specified manually.
    • +
    +
    +
    +
  • +
  • Test specification needs to be flexible (select tests, disable test, test +scheduling (run certain tests nightly), ... ).
  • +
  • Test scheduling by hour+weekday and by priority.
  • +
  • Test dependencies (test A depends on test B being successful).
  • +
  • Historize all configuration data, in particular test configs (permutations +included) and testboxes.
  • +
  • Test sets have at a minimum a build reference, a testbox reference and a
    primary log associated with them.
  • +
  • +
    Test sets stores further result as a recursive collection of:
    +
      +
    • hierarchical subtest name (slash sep)
    • +
    • test parameters / config
    • +
    • bool fail/succ
    • +
    • attributes (typed?)
    • +
    • test time
    • +
    • e.g. throughput
    • +
    • subresults
    • +
    • log
    • +
    • screenshots, video,...
    • +
    +
    +
    +
  • +
  • The test sets database structure needs to be designed such that data mining
    can be done in an efficient manner.
  • +
  • Presentation/analysis: graphs!, categorize bugs, columns reorganizing +grouped by test (hierarchical), overviews, result for last day.
  • +
+
+
+
+

Test Manager: Configuration

+
+

Testboxes

+

Configuration of testboxes doesn't involve much work normally. A testbox
is added manually to the test manager by entering the DNS entry and/or IP
address (the test manager resolves the missing one when necessary) as well as
the system UUID (when obtainable - should be displayed by the testbox script
installer). Queries from unregistered testboxes will be declined as a kind of
security measure; the incident should be logged in the webserver log if
possible. In later dealings with the client the System UUID will be the key
identifier. It's permissible for the IP address to change when the testbox
isn't online, but not while testing (just imagine live migration tests and
network tests). Ideally, the testboxes should not change IP address.

+

The testbox edit function must allow changing the name and system UUID.

+

One further idea for the testbox configuration is indicating what they are
capable of, to filter out tests and test configurations that won't work on that
testbox. To exemplify this, take the ACP2 installation test: if the test
manager does not make sure the testbox has VT-x or AMD-V capabilities, the test
is surely going to fail. Other testbox capabilities would be the total number of
CPU cores, memory size and scratch space. These testbox capabilities should be
collected automatically on bootup by the testbox script together with OS name,
OS version and OS bitness.

+

A final thought, instead of outright declining all requests from new testboxes, +we could record the unregistered testboxes with ip, UUID, name, os info and +capabilities but mark them as inactive. The test operator can then activate +them on an activation page or edit the testbox or something.

+
+
+

Testcases

+

We use the term testcase for a test.

+
+
+

Testgroups

+

Testcases are organized into groups. A testcase can be a member of more than one
group. The testcase gets a priority assigned to it in connection with the
group membership.

+

Testgroups are picked up by a testbox partition (aka scheduling group), and a
priority, a scheduling time restriction and dependencies on other test groups
are associated with the assignment. A testgroup can be used by several testbox
partitions.

+

(This used to be called 'testsuites' but was renamed to avoid confusion with +the VBox Test Suite.)

+
+
+

Scheduling

+

The initial scheduler will be modelled after what we're already doing in the
tinderbox driven testing. It's best described as a best effort continuous
integration scheduler. Meaning, it will always use the latest build suitable
for a testcase. It will schedule on a testcase level, using the combined
priority of the testcase in the test group and the test group within the testbox
partition, trying to spread the test case argument variations out accordingly
over the whole scheduling queue. Which argument variation to start with is
not defined (random would be best).
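To make the priority and variation spreading idea a bit more tangible, here is
a toy sketch of how a scheduling queue could be generated. It is only an
illustration of the idea, not the actual scheduler::

    # Toy sketch: build a scheduling queue that repeats each testcase according
    # to its combined priority and round-robins its argument variations.
    import itertools

    def build_queue(testcases):
        """'testcases' is a list of dicts with 'name', 'priority' (1..n) and
        'variations' (list of argument strings); returns an ordered queue."""
        variation_iters = {tc['name']: itertools.cycle(tc['variations'])
                           for tc in testcases}
        # Higher priority => more queue slots, so it runs more often.
        queue = []
        for tc in testcases:
            queue.extend([tc['name']] * tc['priority'])
        # Interleave by sorting on a running per-test fraction to spread entries.
        counters = {name: 0 for name in variation_iters}
        spread = []
        for name in queue:
            counters[name] += 1
            spread.append((counters[name] / float(queue.count(name)), name))
        spread.sort()
        return [(name, next(variation_iters[name])) for _, name in spread]

    # Example: smoke tests run twice as often as the long install test.
    print(build_queue([
        {'name': 'smoke',   'priority': 2, 'variations': ['--raw', '--hwvirt']},
        {'name': 'install', 'priority': 1, 'variations': ['--os=xp']},
    ]))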

+

Later, we may add other schedulers as needed.

+
+
+
+

The Test Manager Database

+

First a general warning:

+
    The guys working on this design are not database experts, web
    programming experts or similar; rather we are low level guys
    whose main job is x86 & AMD64 virtualization. So, please don't
    be too hard on us. :-)
+

A logical table layout can be found in TestManagerDatabaseMap.png (created by +Oracle SQL Data Modeler, stored in TestManagerDatabase.dmd). The physical +database layout can be found in TestManagerDatabaseInit.pgsql postgreSQL +script. The script is commented.

+
+

Data History

+

We need to somehow track configuration changes over time. We also need to +be able to query the exact configuration a test set was run with so we can +understand and make better use of the results.

+

There are different techniques for achieving this; one is tuple-versioning
( http://en.wikipedia.org/wiki/Tuple-versioning ), another is the log trigger
( http://en.wikipedia.org/wiki/Log_trigger ). We use tuple-versioning in
this database, with 'effective' as the start date field name and 'expire' as
the end (exclusive).

+

Tuple-versioning has a shortcoming with regard to keys, both primary and
foreign. The primary key of a table employing tuple-versioning is really
'id' + 'valid_period', where the latter is expressed using two fields
([effective...expire-1]). Only, how do you tell the database engine that
it should not allow overlapping valid_periods? Useful suggestions are
welcomed. :-)

+

Foreign key references to a table using tuple-versioning run into trouble
because of the time axis, and because, to our knowledge, foreign keys must
reference exactly one row in the other table. When time is involved, what we
wish to tell the database is that at any given time there actually is exactly
one row we want to match in the other table; only we've no idea how to express
this. So, many foreign keys are not expressed in the SQL of this database.

+

In some cases, we extend the tuple-versioning with a generation ID so that +normal foreign key referencing can be used. We only use this for recording +(references in testset) and scheduling (schedqueue), as using it more widely +would force updates (gen_id changes) to propagate into all related tables.
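As an illustration of the tuple-versioning idea, a configuration change would
be recorded roughly as in the sketch below. psycopg2 is assumed purely for
illustration, the column list is heavily simplified (the real TestBoxes table
has many more columns), and the helper itself is not part of the schema
scripts::

    # Sketch: historizing a tuple-versioned row (expire the old one, insert
    # the successor).  Simplified; the real table carries more columns.
    import psycopg2

    def historize_testbox_rename(conn, id_testbox, new_name):
        """Give a testbox a new name without losing its configuration history."""
        with conn:
            with conn.cursor() as cur:
                # Close the currently valid row...
                cur.execute(
                    "UPDATE TestBoxes"
                    "   SET tsExpire = CURRENT_TIMESTAMP"
                    " WHERE idTestBox = %s AND tsExpire = 'infinity'::timestamp"
                    " RETURNING uuidSystem, ip;",
                    (id_testbox,))
                uuid_system, ip = cur.fetchone()
                # ...and insert the successor, valid from now until 'infinity'.
                cur.execute(
                    "INSERT INTO TestBoxes (idTestBox, uuidSystem, ip, sName,"
                    "                       tsEffective, tsExpire)"
                    " VALUES (%s, %s, %s, %s, CURRENT_TIMESTAMP,"
                    "         'infinity'::timestamp);",
                    (id_testbox, uuid_system, ip, new_name))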

+
+
See also:
+
+
+
+
+
+
+

Test Manager: Execution

+
+
+

Test Manager: Scenarios

+
+

#1 - Testbox Signs On (At Bootup)

+
+
The testbox supplies a number of inputs when reporting for duty:
+
    +
  • IP address.
  • +
  • System UUID.
  • +
  • OS name.
  • +
  • OS version.
  • +
  • CPU architecture.
  • +
  • CPU count (= threads).
  • +
  • CPU VT-x/AMD-V capability.
  • +
  • CPU nested paging capability.
  • +
  • Chipset I/O MMU capability.
  • +
  • Memory size.
  • +
  • Scratch size space (for testing).
  • +
  • Testbox Script revision.
  • +
+
+
Results:
+
    +
  • ACK or NACK.
  • +
  • Testbox ID and name on ACK.
  • +
+
+
+

After receiving an ACK the testbox will ask for work to do, i.e. continue with
scenario #2. In the NACK case, it will sleep for 60 seconds and try again.

+

Actions:

+
  1. Validate the testbox by looking the UUID up in the TestBoxes table.
     If not found, NACK the request. SQL::

         SELECT  idTestBox, sName
         FROM    TestBoxes
         WHERE   uuidSystem = :sUuid
           AND   tsExpire = 'infinity'::timestamp;

  2. Check if any of the information supplied by the testbox script has
     changed. The two sizes are normalized first: memory size is rounded to
     the nearest 4 MB and scratch space is rounded down to the nearest 64 MB.
     If anything changed, insert a new row in the testbox table and historize
     the current one, i.e. set OLD.tsExpire to NEW.tsEffective and get a new
     value for NEW.idGenTestBox.

  3. Check with TestBoxStatuses:

     1. If there is already a row for the testbox, change it to the 'idle'
        state and deal with any open test set as described in scenario #9.

     2. If there is no row, add one with the 'idle' state.

  4. ACK the request and pass back the idTestBox.

Note! Testbox.enabled is not checked here, that is only relevant when the
testbox asks for a new task (scenarios #2 and #5).

Note! Should the testbox script detect changes in any of the inputs, it should
redo the sign on.

Note! In scenario #8, the box will not sign on until it has done the reboot and
cleanup reporting!
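A compressed sketch of this sign-on handling on the server side could look as
follows. The function, the report field names and the helpers marked as
assumed are made up for illustration; the real handler naturally does quite a
bit more::

    # Sketch of the testbox sign-on handler (illustrative names, simplified
    # flow; 'db' is assumed to be a DB-API connection).

    def handle_signon(db, report):
        """'report' is the dict of values the testbox script submitted."""
        cur = db.cursor()
        cur.execute("SELECT idTestBox, idGenTestBox, sName FROM TestBoxes"
                    " WHERE uuidSystem = %s AND tsExpire = 'infinity'::timestamp;",
                    (report['uuidSystem'],))
        row = cur.fetchone()
        if row is None:
            return {'result': 'NACK'}              # unknown box: decline (and log it)
        id_testbox, id_gen, name = row

        # Normalize sizes before comparing, then historize the row if changed.
        # ('cMbMemory'/'cMbScratch' are assumed field names.)
        report['cMbMemory']  = int(round(report['cMbMemory'] / 4.0)) * 4
        report['cMbScratch'] = (report['cMbScratch'] // 64) * 64
        if testbox_config_differs(cur, id_gen, report):       # assumed helper
            id_gen = historize_testbox(cur, id_testbox, report)  # assumed helper

        ensure_idle_status(cur, id_testbox)        # scenario #9 cleanup lives here
        db.commit()
        return {'result': 'ACK', 'idTestBox': id_testbox, 'sName': name}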
+
+
+
+

#2 - Testbox Asks For Work To Do

+
+
Inputs:
+
    +
  • The testbox is supplying its IP indirectly.
  • +
  • The testbox should supply its UUID and ID directly.
  • +
+
+
Results:
+
    +
  • IDLE, WAIT, EXEC, REBOOT, UPGRADE, UPGRADE-AND-REBOOT, SPECIAL or DEAD.
  • +
+
+
+

Actions:

+
    +
  1. Validate the ID and IP by selecting the currently valid testbox row:

    +
    +SELECT  idGenTestBox, fEnabled, idSchedGroup, enmPendingCmd
    +FROM    TestBoxes
    +WHERE   id = :id
    +  AND   uuidSystem = :sUuid
    +  AND   ip = :ip
    +  AND   tsExpire = 'infinity'::timestamp;
    +
    +

    If NOT found return DEAD to the testbox client (it will go back to sign on +mode and retry every 60 seconds or so - see scenario #1).

    +
    +
    Note! The WUI will do all necessary clean-ups when deleting a testbox, so
    +

    contrary to the initial plans, we don't need to do anything more for +the DEAD status.

    +
    +
    +
  2. +
  3. Check with TestBoxStatuses (maybe joined with query from 1).

    +

    If enmState is 'gang-gathering': Goto scenario #6 on timeout or pending +'abort' or 'reboot' command. Otherwise, tell the testbox to WAIT [done].

    +

    If enmState is 'gang-testing': The gang has been gathered and execution +has been triggered. Goto 5.

    +

    If enmState is not 'idle', change it to 'idle'.

    +

If idTestSet is not NULL, CALL scenario #9 to clean it up.

    +

    If there is a pending abort command, remove it.

    +

    If there is a pending command and the old state doesn't indicate that it was +being executed, GOTO scenario #3.

    +
    +
    Note! There should be a TestBoxStatuses row after executing scenario #1,
    +

    however should none be found for some funky reason, returning DEAD +will fix the problem (see above)

    +
    +
    +
  4. +
  5. If the testbox was marked as disabled, respond with an IDLE command to the +testbox [done]. (Note! Must do this after TestBoxStatuses maintenance from +point 2, or abandoned tests won't be cleaned up after a testbox is disabled.)

    +
  6. +
  7. Consider testcases in the scheduling queue and pick the first one which the
     testbox can execute. There is a concurrency issue here, so we put an
     exclusive lock on the SchedQueues table while considering its content.

    +

    The cursor we open looks something like this:

    +
    +SELECT  idItem, idGenTestCaseArgs,
    +        idTestSetGangLeader, cMissingGangMembers
    +FROM    SchedQueues
    +WHERE   idSchedGroup = :idSchedGroup
    +   AND  (   bmHourlySchedule is NULL
    +         OR get_bit(bmHourlySchedule, :iHourOfWeek) = 1 ) --< does this work?
    +ORDER BY ASC idItem;
    +
    +
  8. +
+
+

If no rows are returned (this can happen because no testgroups are
associated with this scheduling group, the scheduling group is disabled,
or because the queue is being regenerated), we will tell the testbox to
IDLE [done].

+
+
For each returned row we will:
+
    +
  1. Check testcase/group dependencies.

    +
  2. +
  3. Select a build (and default testsuite) satisfying the dependencies.

    +
  4. +
  5. Check the testcase requirements with that build in mind.

    +
  6. +
  7. If idTestSetGangLeader is NULL, try allocate the necessary resources.

    +
  8. +
  9. If it didn't check out, fetch the next row and redo from (a).

    +
  10. +
  11. Tentatively create a new test set row.

    +
  12. +
  13. +
    If not gang scheduling:
    +
      +
    • Next state: 'testing'
    • +
    +
    +
    ElIf we're the last gang participant:
    +
      +
    • Set idTestSetGangLeader to NULL.
    • +
    • Set cMissingGangMembers to 0.
    • +
    • Next state: 'gang-testing'
    • +
    +
    +
    ElIf we're the first gang member:
    +
      +
    • Set cMissingGangMembers to TestCaseArgs.cGangMembers - 1.
    • +
    • Set idTestSetGangLeader to our idTestSet.
    • +
    • Next state: 'gang-gathering'
    • +
    +
    +
    Else:
    +
      +
    • Decrement cMissingGangMembers.
    • +
    • Next state: 'gang-gathering'
    • +
    +
    +
    If we're not gang scheduling OR cMissingGangMembers is 0:
    +

    Move the scheduler queue entry to the end of the queue.

    +
    +
    +

    Update our TestBoxStatuses row with the new state and test set. +COMMIT;

    +
  14. +
+
+
+
+
    +
  1. +
Finally: if the state is 'testing' or 'gang-testing', respond with EXEC;
if the state is 'gang-gathering', respond with WAIT.

The EXEC response for a gang scheduled testcase includes a number of
extra arguments so that the script knows the position of the testbox
it is running on and of the other members. This means that
TestSet.iGangMemberNo is passed using --gang-member-no and the IP
addresses of all gang members using --gang-ipv4-<memb-no> <ip>.
+
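The gang gathering bookkeeping described above can be summarized in a small
sketch. The field names follow the tables used in this document, but the
function itself is only an illustration of the state transitions, not the
actual scheduler code::

    # Sketch: deciding the next state when a testbox picks up a (possibly
    # gang scheduled) queue entry.  'row' mirrors the SchedQueues columns.

    def next_gang_state(row, c_gang_members, id_test_set):
        """Return (state, updates) where 'updates' are the queue row changes."""
        if c_gang_members <= 1:
            return 'testing', {}                    # not gang scheduled at all
        if row['cMissingGangMembers'] == 1:
            # We are the last member: the gang is complete, start execution.
            return 'gang-testing', {'idTestSetGangLeader': None,
                                    'cMissingGangMembers': 0}
        if row['idTestSetGangLeader'] is None:
            # First member: become the leader and wait for the rest.
            return 'gang-gathering', {'idTestSetGangLeader': id_test_set,
                                      'cMissingGangMembers': c_gang_members - 1}
        # Somewhere in the middle: just note that one more member has arrived.
        return 'gang-gathering', {'cMissingGangMembers':
                                  row['cMissingGangMembers'] - 1}

    # Example: second of three gang members joining.
    print(next_gang_state({'idTestSetGangLeader': 42, 'cMissingGangMembers': 2},
                          c_gang_members=3, id_test_set=43))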
+
+

#3 - Pending Command When Testbox Asks For Work

+

This is a subfunction of scenario #2 and #5.

+

As seen in scenario #2, the testbox will send 'abort' commands to /dev/null
when it finds one while not executing a test. This includes when it reports
that the test has completed (no need to abort a completed test, wasting a lot
of effort when standing at the finish line).

+

The other commands, though, are passed back to the testbox. The testbox +script will respond with an ACK or NACK as it sees fit. If NACKed, the +pending command will be removed (pending_cmd set to none) and that's it. +If ACKed, the state of the testbox will change to that appropriate for the +command and the pending_cmd set to none. Should the testbox script fail to +respond, the command will be repeated the next time it asks for work.

+
+
+

#4 - Testbox Uploads Results During Test

+

TODO

+
+
+

#5 - Testbox Completes Test and Asks For Work

+

This is very similar to scenario #2

+

TODO

+
+
+

#6 - Gang Gathering Timeout

+

This is a subfunction of scenario #2.

+

When gathering a gang of testboxes for a testcase, we do not want to wait +forever and have testboxes doing nothing for hours while waiting for partners. +So, the gathering has a reasonable timeout (imagine something like 20-30 mins).

+

Also, we need some way of dealing with 'abort' and 'reboot' commands being
issued while waiting. The easy way out is to pretend it's a timeout.

+

When changing the status to 'gang-timeout' we have to be careful. First of all, +we need to exclusively lock the SchedQueues and TestBoxStatuses (in that order) +and re-query our status. If it changed redo the checks in scenario #2 point 2.

+

If we still want to timeout/abort, change the state from 'gang-gathering' to
'gang-gathering-timedout' on all the gang members that have gathered so far.
Then reset the scheduling queue record and move it to the end of the queue.

+

When acting on 'gang-timeout' the TM will fail the testset in a manner similar +to scenario #9. No need to repeat that.

+
+
+

#7 - Gang Cleanup

+

When a testbox completes a gang scheduled test, we will have to serialize +resource cleanup (both globally and on testboxes) as they stop. More details +can be found in the documentation of 'gang-cleanup'.

+

So, the transition from 'gang-testing' is always to 'gang-cleanup'. When we +can safely leave 'gang-cleanup' is decided by the query:

+
+SELECT  COUNT(*)
+FROM    TestBoxStatuses,
+        TestSets
+WHERE   TestSets.idTestSetGangLeader = :idTestSetGangLeader
+    AND TestSets.idTestBox = TestBoxStatuses.idTestBox
+    AND TestBoxStatuses.enmState = 'gang-running'::TestBoxState_T;
+
+

As long as there are testboxes still running, we stay in the 'gang-cleanup' +state. Once there are none, we continue closing the testset and such.

+
+
+

#8 - Testbox Reports A Crash During Test Execution

+

TODO

+
+
+

#9 - Cleaning Up Abandoned Testcase

+

This is a subfunction of scenario #1 and #2. The actions taken are the same in +both situations. The precondition for taking this path is that the row in the +testboxstatus table is referring to a testset (i.e. testset_id is not NULL).

+

Actions:

+
  1. If the testset is incomplete, we need to complete it:

     1. Add a message to the root TestResults row, creating one if necessary,
        that explains that the test was abandoned. This is done by
        inserting/finding the string into/in TestResultStrTab and adding
        a row to TestResultMsgs with idStrMsg set to that string id and
        enmLevel set to 'failure'.

     2. Mark the testset as failed.

  2. Free any global resources referenced by the test set. This is done by
     deleting all rows in GlobalResourceStatuses matching the testbox id.

  3. Set the idTestSet to NULL in the TestBoxStatuses row.
+
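A condensed sketch of this cleanup is shown below. The SQL follows the table
names used above, but the TestSets status columns, the failure-message helper
and the function name are assumptions made purely for illustration::

    # Sketch: cleaning up an abandoned test set (simplified; the TestSets
    # status columns and the message helper are assumed, not the real schema).

    ABANDONED_MSG = 'The test was abandoned (testbox signed on or asked for work).'

    def cleanup_abandoned_testset(cur, id_testbox, id_test_set):
        add_failure_message(cur, id_test_set, ABANDONED_MSG)   # assumed helper
        cur.execute("UPDATE TestSets"
                    "   SET enmStatus = 'failure', tsDone = CURRENT_TIMESTAMP"
                    " WHERE idTestSet = %s AND tsDone IS NULL;",
                    (id_test_set,))
        # Free any global resources this testbox was holding.
        cur.execute("DELETE FROM GlobalResourceStatuses WHERE idTestBox = %s;",
                    (id_testbox,))
        # And finally detach the test set from the testbox status row.
        cur.execute("UPDATE TestBoxStatuses SET idTestSet = NULL"
                    " WHERE idTestBox = %s;",
                    (id_testbox,))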
+
+

#10 - Cleaning Up a Disabled/Dead TestBox

+

The UI needs to be able to clean up the remains of a testbox which for some +reason is out of action. Normal cleaning up of abandoned testcases requires +that the testbox signs on or asks for work, but if the testbox is dead or +in some way indisposed, it won't be doing any of that. So, the testbox +sheriff needs to have a way of cleaning up after it.

+

It's basically a manual scenario #9 but with some safe guards, like checking +that the box hasn't been active for the last 1-2 mins (max idle/wait time * 2).

+
+
Note! When disabling a box that is still executing the testbox script, this
cleanup isn't necessary as it will happen automatically. Also, it's
probably desirable that the testbox finishes whatever it is doing
before going dormant.
+
+
+
+
+

Test Manager: Analysis

+

One of the testbox sheriff's tasks is to try to figure out the reason why
something failed. The test manager will provide facilities for doing so from
very early in its implementation.

+

We need to work out some useful status reports for the early implementation. +Later there will be more advanced analysis tools, where for instance we can +create graphs from selected test result values or test execution times.

+
+
+

Implementation Plan

+

This has changed for various reasons. The current plan is to implement the +infrastructure (TM & testbox script) first and do a small deployment with the +2-5 test drivers in the Testsuite as basis. Once the bugs are worked out, we +will convert the rest of the tests and start adding new ones.

+

We just need to finally get this done, no point in doing it piecemeal by now!

+
+

Test Manager Implementation Sub-Tasks

+

The implementation of the test manager and adjusting/completing of the testbox +script and the test drivers are tasks which can be done by more than one +person. Splitting up the TM implementation into smaller tasks should allow +parallel development of different tasks and get us working code sooner.

+
+
+

Milestone #1

+

The goal is to get the fundamental test manager engine implemented, debugged
and working. With the exception of testboxes, the configuration will be done
via SQL inserts.

+

Tasks in somewhat prioritized order:

+
+
    +
  • Kick off test manager. It will live in testmanager/. Salvage as much as +possible from att/testserv. Create basic source and file layout.
  • +
  • Adjust the testbox script, part one. There currently is a testbox script
    in att/testbox; this shall be moved up into testboxscript/. The script
    needs to be adjusted according to the specification laid down earlier
    in this document. Installers or installation scripts for all relevant
    host OSes are required. Left for part two is result reporting beyond the
    primary log. This task must be 100% feature complete, on all host OSes;
    there is no room for FIXME, XXX or @todo here.
  • +
  • Implement the schedule queue generator.
  • +
  • Implement the testbox dispatcher in TM. Support all the testbox script +responses implemented above, including upgrading the testbox script.
  • +
  • Implement simple testbox management page.
  • +
  • Implement some basic activity and result reports so that we can see +what's going on.
  • +
  • Create a testmanager / testbox test setup. This lives in selftest/.
      +
    1. Set up something that runs, no fiddly bits. Debug till it works.
    2. +
    3. Create a setup that tests testgroup dependencies, i.e. real tests +depending on smoke tests.
    4. +
    5. Create a setup that exercises testcase dependency.
    6. +
    7. Create a setup that exercises global resource allocation.
    8. +
    9. Create a setup that exercises gang scheduling.
    10. +
    +
  • +
  • Check that all features work.
  • +
+
+
+
+

Milestone #2

+

The goal is getting to VBox testing.

+

Tasks in somewhat prioritized order:

+
+
    +
  • Implement full result reporting in the testbox script and testbox driver. +A testbox script specific reporter needs to be implemented for the +testdriver framework. The testbox script needs to forward the results to +the test manager, or alternatively the testdriver report can talk +directly to the TM.
  • +
  • Implement the test manager side of the test result reporting.
  • +
  • Extend the selftest with some setup that report all kinds of test +results.
  • +
  • Implement script/whatever feeding builds to the test manager from the +tinderboxes.
  • +
  • The toplevel test driver is a VBox thing that must be derived from the +base TestDriver class or maybe the VBox one. It should move from +toptestdriver to testdriver and be renamed to vboxtltd or smth.
  • +
  • Create a vbox testdriver that boots the t-xppro VM once and that's it.
  • +
  • Create a selftest setup which tests booting t-xppro taking builds from +the tinderbox.
  • +
+
+
+
+

Milestone #3

+

The goal for this milestone is configuration and converting the current
testcases; the result will be a minimal test deployment (4-5 new testboxes).

+

Tasks in somewhat prioritized order:

+
+
    +
  • Implement testcase configuration.
  • +
  • Implement testgroup configuration.
  • +
  • Implement build source configuration.
  • +
  • Implement scheduling group configuration.
  • +
  • Implement global resource configuration.
  • +
  • Re-visit the testbox configuration.
  • +
  • Black listing of builds.
  • +
  • Implement simple failure analysis and reporting.
  • +
  • Implement the initial smoke tests modelled on the current smoke tests.
  • +
  • Implement installation tests for Windows guests.
  • +
  • Implement installation tests for Linux guests.
  • +
  • Implement installation tests for Solaris guest.
  • +
  • Implement installation tests for OS/2 guest.
  • +
  • Set up a small test deployment.
  • +
+
+
+
+

Further work

+

After milestone #3 has been reached and issues found by the other team members +have been addressed, we will probably go for full deployment.

+

Beyond this point we will need to improve reporting and analysis. There may be +configuration aspects needing reporting as well.

+

Once deployed, a golden rule will be that all new features shall have test +coverage. Preferably, implemented by someone else and prior to the feature +implementation.

+
+
+
+

Discussion Logs

+
+

2009-07-21,22,23 Various Discussions with Michal and/or Klaus

+
    +
  • Scheduling of tests requiring more than one testbox.
  • +
  • Scheduling of tests that cannot be executing concurrently on several machines +because of some global resource like an iSCSI target.
  • +
  • Manually create the test config permutations instead of having the test +manager create all possible ones and wasting time.
  • +
  • Distinguish between build types so we can run smoke tests on strict builds as
    well as release ones.
  • +
+
+
+

2009-07-20 Brief Discussion with Michal

+
    +
  • Installer for the testbox script to make bringing up a new testbox even +smoother.
  • +
+
+
+

2009-07-16 Raw Input

+
    +
  • +
    test set. recursive collection of:
    +
      +
    • hierarchical subtest name (slash sep)
    • +
    • test parameters / config
    • +
    • bool fail/succ
    • +
    • attributes (typed?)
    • +
    • test time
    • +
    • e.g. throughput
    • +
    • subresults
    • +
    • log
    • +
    • screenshots,....
    • +
    +
    +
    +
  • +
  • client package (zip) dl from server (maybe client caching)
  • +
  • +
    thoughts on bits to do at once.
    +
      +
    • We really need the basic bits ASAP.
    • +
    • client -> support for test driver
    • +
    • server -> controls configs
    • +
    • cleanup on both sides
    • +
    +
    +
    +
  • +
+
+
+

2009-07-15 Raw Input

+
    +
  • testing should start automatically
  • +
  • switching to branch too tedious
  • +
  • useful to be able to partition testboxes (run specific builds on some boxes, let an engineer have a few boxes for a while).
  • +
  • test specification needs to be more flexible (select tests, disable test, test scheduling (run certain tests nightly), ... )
  • +
  • testcase dependencies (blacklisting builds, run smoketests on box A before long tests on box B, ...)
  • +
  • more testing flexibility, more tests than just install/smoke. For instance unit tests, benchmarks, ...
  • +
  • presentation/analysis: graphs!, categorize bugs, columns reorganizing grouped by test (hierarchical), overviews, result for last day.
  • +
  • testcase specification, variables (e.g. I/O-APIC, SMP, HWVIRT, SATA...) as sub-tests
  • +
  • interaction with ILOM/...: reset systems
  • +
  • Changes needs LDAP authentication
  • +
  • historize all configuration w/ name
  • +
  • ability to run testcase locally (provided the VDI/ISO/whatever extra requirements can be met).
  • +
+
+ + + + + +
+ +++ + + + + + +
Status:$Id: AutomaticTestingRevamp.html $
Copyright:Copyright (C) 2010-2023 Oracle Corporation.
+
+
+
+ + diff --git a/src/VBox/ValidationKit/docs/AutomaticTestingRevamp.txt b/src/VBox/ValidationKit/docs/AutomaticTestingRevamp.txt new file mode 100644 index 00000000..17d920ba --- /dev/null +++ b/src/VBox/ValidationKit/docs/AutomaticTestingRevamp.txt @@ -0,0 +1,1061 @@ + +Revamp of Automatic VirtualBox Testing +====================================== + + +Introduction +------------ + +This is the design document for a revamped automatic testing framework. +The revamp aims at replacing the current tinderbox based testing by a new +system that is written from scratch. + +The old system is not easy to work with and was never meant to be used for +managing tests, after all it just a simple a build manager tailored for +contiguous building. Modifying the existing tinderbox system to do what +we want would require fundamental changes that would render it useless as +a build manager, it would therefore end up as a fork. The amount of work +required would probably be about the same as writing a new system from +scratch. Other considerations, such as the license of the tinderbox +system (MPL) and language it is realized in (Perl), are also in favor of +doing it from scratch. + +The language envisioned for the new automatic testing framework is Python. This +is for several reasons: + + - The VirtualBox API has Python bindings. + - Python is used quite a bit inside Sun (dunno about Oracle). + - Works relatively well with Apache for the server side bits. + - It is more difficult to produce write-only code in Python (alias the + we-don't-like-perl argument). + - You don't need to compile stuff. + +Note that the author of this document has no special training as a test +engineer and may therefore be using the wrong terms here and there. The +primary focus is to express what we need to do in order to improve +testing. + +This document is written in reStructuredText (rst) which just happens to +be used by Python, the primary language for this revamp. For more +information on reStructuredText: http://docutils.sourceforge.net/rst.html + + +Definitions / Glossary +====================== + +sub-test driver + A set of test cases that can be used by more than one test driver. Could + also be called a test unit, in the pascal sense of unit, if it wasn't so + easily confused with 'unit test'. + +test + This is somewhat ambiguous and this document try avoid using it where + possible. When used it normally refers to doing testing by executing one or + more testcases. + +test case + A set of inputs, test programs and expected results. It validates system + requirements and generates a pass or failed status. A basic unit of testing. + Note that we use the term in a rather broad sense. + +test driver + A program/script used to execute a test. Also known as a test harness. + Generally abbreviated 'td'. It can have sub-test drivers. + +test manager + Software managing the automatic testing. This is a web application that runs + on a dedicated server (tindertux). + +test set + The output of testing activity. Logs, results, ++. Our usage of this should + probably be renamed to 'test run'. + +test group + A collection of related test cases. + +testbox + A computer that does testing. + +testbox script + Script executing orders from the test manager on a testbox. Started + automatically upon bootup. + +testing + todo + +TODO: Check that we've got all this right and make them more exact + where possible. + +See also http://encyclopedia2.thefreedictionary.com/testing%20types +and http://www.aptest.com/glossary.html . 
+ + + +Objectives +========== + + - A scalable test manager (>200 testboxes). + - Optimize the web user interface (WUI) for typical workflows and analysis. + - Efficient and flexibile test configuration. + - Import test result from other test systems (logo testing, VDI, ++). + - Easy to add lots of new testscripts. + - Run tests locally without a manager. + - Revamp a bit at the time. + + + +The Testbox Side +================ + +Each testbox has a unique name corresponding to its DNS zone entry. When booted +a testbox script is started automatically. This script will query the test +manager for orders and execute them. The core order downloads and executes a +test driver with parameters (configuration) from the server. The test driver +does all the necessary work for executing the test. In a typical VirtualBox +test this means picking a build, installing it, configuring VMs, running the +test VMs, collecting the results, submitting them to the server, and finally +cleaning up afterwards. + +The testbox environment which the test drivers are executed in will have a +number of environment variables for determining location of the source images +and other test data, scratch space, test set id, server URL, and so on and so +forth. + +On startup, the testbox script will look for crash dumps and similar on +systems where this is possible. If any sign of a crash is found, it will +put any dumps and reports in the upload directory and inform the test +manager before reporting for duty. In order to generate the proper file +names and report the crash in the right test set as well as prevent +reporting crashes unrelated to automatic testing, the testbox script will +keep information (test set id, ++) in a separate scratch directory +(${TESTBOX_PATH_SCRATCH}/../testbox) and make sure it is synced to the +disk (both files and directories). + +After checking for crashes, the testbox script will clean up any previous test +which might be around. This involves first invoking the test script in cleanup +mode and the wiping the scratch space. + +When reporting for duty the script will submit information about the host: OS +name, OS version, OS bitness, CPU vendor, total number of cores, VT-x support, +AMD-V support, amount of memory, amount of scratch space, and anything else that +can be found useful for scheduling tests or filtering test configurations. + + + +Testbox Script Orders +--------------------- + +The orders are kept in a queue on the server and the testbox script will fetch +them one by one. Orders that cannot be executed at the moment will be masked in +the query from the testbox. + +Execute Test Driver + Downloads and executes the a specified test driver with the given + configuration (arguments). Only one test driver can be executed at a time. + The server can specify more than one ZIP file to be downloaded and unpacked + before executing the test driver. The testbox script may cache these zip + files using http time stamping. + +Abort Test Driver + Aborts the current test driver. This will drop a hint to the driver and give + it 60 seconds to shut down the normal way. If that fails, the testbox script + will kill the driver processes (SIGKILL or equivalent), invoke the + testdriver in cleanup mode, and finally wipe the scratch area. Should either + of the last two steps fail in some way, the testbox will be rebooted. + +Idle + Ask again in X seconds, where X is specified by the server. + +Reboot + Reboot the testbox. 
If a test driver is current running, an attempt at + aborting it (Abort Test Driver) will be made first. + +Update + Updates the testbox script. The order includes a server relative path to the + new testbox script. This can only be executed when no test driver is + currently being executed. + + +Testbox Environment: Variables +------------------------------ + +COMSPEC + This will be set to C:\Windows\System32\cmd.exe on Windows. + +PATH + This will contain the kBuild binary directory for the host platform. + +SHELL + This will be set to point to kmk_ash(.exe) on all platforms. + +TESTBOX_NAME + The testbox name. + This is not required by the local reporter. + +TESTBOX_PATH_BUILDS + The absolute path to where the build repository can be found. This should be + a read only mount when possible. + +TESTBOX_PATH_RESOURCES + The absolute path to where static test resources like ISOs and VDIs can be + found. The test drivers knows the layout of this. This should be a read only + mount when possible. + +TESTBOX_PATH_SCRATCH + The absolute path to the scratch space. This is the current directory when + starting the test driver. It will be wiped automatically after executing the + test. + (Envisioned as ${TESTBOX_PATH_SCRIPTS}/../scratch and that + ${TESTBOX_PATH_SCRATCH}/ will be automatically wiped by the testbox script.) + +TESTBOX_PATH_SCRIPTS + The absolute path to the test driver and the other files that was unzipped + together with it. This is also where the test-driver-abort file will be put. + (Envisioned as ${TESTBOX_PATH_SCRATCH}/../driver, see above.) + +TESTBOX_PATH_UPLOAD + The absolute path to the upload directory for the testbox. This is for + putting VOBs, PNGs, core dumps, crash dumps, and such on. The files should be + bzipped or zipped if they aren't compress already. The names should contain + the testbox and test set ID. + +TESTBOX_REPORTER + The name of the test reporter back end. If not present, it will default to + the local reporter. + +TESTBOX_TEST_SET_ID + The test set ID if we're running. + This is not required by the local reporter. + +TESTBOX_MANAGER_URL + The URL to the test manager. + This is not required by the local reporter. + +TESTBOX_XYZ + There will probably be some more of these. + + +Testbox Environment: Core Utilities +----------------------------------- + +The testbox will not provide the typical unix /bin and /usr/bin utilities. In +other words, cygwin will not be used on Windows! + +The testbox will provide the unixy utilities that ships with kBuild and possibly +some additional ones from tools/*.*/bin in the VirtualBox tree (wget, unzip, +zip, and so on). The test drivers will avoid invoking any of these utilities +directly and instead rely on generic utility methods in the test driver +framework. That way we can more easily reimplement the functionality of the +core utilities and drop the dependency on them. It also allows us to quickly +work around platform specific oddities and bugs. + + +Test Drivers +------------ + +The test drivers are programs that will do the actual testing. In addition to +run under the testbox script, they can be executed in the VirtualBox development +environment. This is important for bug analysis and for simplifying local +testing by the developers before committing changes. It also means the test +drivers can be developed locally in the VirtualBox development environment. + +The main difference between executing a driver under the testbox script and +running it manually is that there is no test manager in the latter case. 
The +test result reporter will not talk to the server, but report things to a local +log file and/or standard out/err. When invoked manually, all the necessary +arguments will need to be specified by hand of course - it should be possible +to extract them from a test set as well. + +For the early implementation stages, an implementation of the reporter interface +that talks to the tinderbox base test manager will be needed. This will be +dropped later on when a new test manager is ready. + +As hinted at in other sections, there will be a common framework +(libraries/packages/classes) for taking care of the tedious bits that every +test driver needs to do. Sharing code is essential to easing test driver +development as well as reducing their complexity. The framework will contain: + + - A generic way of submitting output. This will be a generic interface with + multiple implementation, the TESTBOX_REPORTER environment variable + will decide which of them to use. The interface will have very specific + methods to allow the reporter to do a best possible job in reporting the + results to the test manager. + + - Helpers for typical tasks, like: + - Copying files. + - Deleting files, directory trees and scratch space. + - Unzipping files. + - Creating ISOs + - And such things. + + - Helpers for installing and uninstalling VirtualBox. + + - Helpers for defining VMs. (The VBox API where available.) + + - Helpers for controlling VMs. (The VBox API where available.) + +The VirtualBox bits will be separate from the more generic ones, simply because +this is cleaner it will allow us to reuse the system for testing other products. + +The framework will be packaged in a zip file other than the test driver so we +don't waste time and space downloading the same common code. + +The test driver will poll for the file +${TESTBOX_PATH_SCRIPTS}/test-driver-abort and abort all testing when it sees it. + +The test driver can be invoked in three modes: execute, help and cleanup. The +default is execute mode, the help shows an configuration summary and the cleanup +is for cleaning up after a reboot or aborted run. The latter is done by the +testbox script on startup and after abort - the driver is expected to clean up +by itself after a normal run. + + + +The Server Side +=============== + +The server side will be implemented using a webserver (apache), a database +(postgres) and cgi scripts (Python). In addition a cron job (Python) running +once a minute will generate static html for frequently used pages and maybe +execute some other tasks for driving the testing forwards. The order queries +from the testbox script is the primary driving force in the system. The total +makes up the test manager. + +The test manager can be split up into three rough parts: + + - Configuration (of tests, testgroups and testboxes). + - Execution (of tests, collecting and organizing the output). + - Analysis (of test output, mostly about presentation). + + +Test Manager: Requirements +========================== + +List of requirements: + + - Two level testing - L1 quick smoke tests and L2 longer tests performed on + builds passing L1. (Klaus (IIRC) meant this could be realized using + test dependency.) + - Black listing builds (by revision or similar) known to be bad. + - Distinguish between build types so we can do a portion of the testing with + strict builds. + - Easy to re-configure build source for testing different branch or for + testing a release candidate. (Directory based is fine.) 
+ - Useful to be able to partition testboxes (run specific builds on some + boxes, let an engineer have a few boxes for a while). + - Interaction with ILOM/...: reset systems. + - Be able to suspend testing on selected testboxes when doing maintenance + (where automatically resuming testing on reboot is undesired) or similar + activity. + - Abort testing on selected testboxes. + - Scheduling of tests requiring more than one testbox. + - Scheduling of tests that cannot be executing concurrently on several + machines because of some global resource like an iSCSI target. + - Jump the scheduling queue. Scheduling of specified test the next time a + testbox is available (optionally specifying which testbox to schedule it + on). + - Configure tests with variable configuration to get better coverage. Two modes: + - TM generates the permutations based on one or more sets of test script arguments. + - Each configuration permutation is specified manually. + - Test specification needs to be flexible (select tests, disable test, test + scheduling (run certain tests nightly), ... ). + - Test scheduling by hour+weekday and by priority. + - Test dependencies (test A depends on test B being successful). + - Historize all configuration data, in particular test configs (permutations + included) and testboxes. + - Test sets has at a minimum a build reference, a testbox reference and a + primary log associated with it. + - Test sets stores further result as a recursive collection of: + - hierarchical subtest name (slash sep) + - test parameters / config + - bool fail/succ + - attributes (typed?) + - test time + - e.g. throughput + - subresults + - log + - screenshots, video,... + - The test sets database structure needs to designed such that data mining + can be done in an efficient manner. + - Presentation/analysis: graphs!, categorize bugs, columns reorganizing + grouped by test (hierarchical), overviews, result for last day. + + + +Test Manager: Configuration +=========================== + + +Testboxes +--------- + +Configuration of testboxes doesn't involve much work normally. A testbox +is added manually to the test manager by entering the DNS entry and/or IP +address (the test manager resolves the missing one when necessary) as well as +the system UUID (when obtainable - should be displayed by the testbox script +installer). Queries from unregistered testboxes will be declined as a kind of +security measure, the incident should be logged in the webserver log if +possible. In later dealings with the client the System UUID will be the key +identifier. It's permittable for the IP address to change when the testbox +isn't online, but not while testing (just imagine live migration tests and +network tests). Ideally, the testboxes should not change IP address. + +The testbox edit function must allow changing the name and system UUID. + +One further idea for the testbox configuration is indicating what they are +capable of to filter out tests and test configurations that won't work on that +testbox. To examplify this take the ACP2 installation test. If the test +manager does not make sure the testbox have VT-x or AMD-v capabilities, the test +is surely going to fail. Other testbox capabilities would be total number of +CPU cores, memory size, scratch space. These testbox capabilities should be +collected automatically on bootup by the testbox script together with OS name, +OS version and OS bitness. 
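To give an idea of what that automatic collection could look like, here is a
rough Python sketch. It is an illustration only, with made up field names; the
VT-x/AMD-V, nested paging and I/O MMU capabilities need platform specific
probing and are therefore left out here::

    import multiprocessing
    import os
    import platform
    import shutil

    def gatherSignOnInfo(sScratchPath):
        """ Gathers the testbox properties reported at sign on (sketch only). """
        return {
            'sOs':        platform.system().lower(),
            'sOsVersion': platform.release(),
            'sCpuArch':   platform.machine(),
            'cCpus':      multiprocessing.cpu_count(),
            # Unix only; a Windows variant would query GlobalMemoryStatusEx instead.
            'cMbMemory':  os.sysconf('SC_PHYS_PAGES') * os.sysconf('SC_PAGE_SIZE') // (1024 * 1024),
            'cMbScratch': shutil.disk_usage(sScratchPath).free // (1024 * 1024),
        }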
+ +A final thought, instead of outright declining all requests from new testboxes, +we could record the unregistered testboxes with ip, UUID, name, os info and +capabilities but mark them as inactive. The test operator can then activate +them on an activation page or edit the testbox or something. + + +Testcases +--------- + +We use the term testcase for a test. + + +Testgroups +---------- + +Testcases are organized into groups. A testcase can be member of more than one +group. The testcase gets a priority assigned to it in connection with the +group membership. + +Testgroups are picked up by a testbox partition (aka scheduling group) and a +prioirty, scheduling time restriction and dependencies on other test groups are +associated with the assignment. A testgroup can be used by several testbox +partitions. + +(This used to be called 'testsuites' but was renamed to avoid confusion with +the VBox Test Suite.) + + +Scheduling +---------- + +The initial scheduler will be modelled after what we're doing already on in the +tinderbox driven testing. It's best described as a best effort continuous +integration scheduler. Meaning, it will always use the latest build suitable +for a testcase. It will schedule on a testcase level, using the combined +priority of the testcase in the test group and the test group with the testbox +partition, trying to spread the test case argument variation out accordingly +over the whole scheduilng queue. Which argument variation to start with, is +not undefined (random would be best). + +Later, we may add other schedulers as needed. + + + +The Test Manager Database +========================= + +First a general warning: + + The guys working on this design are not database experts, web + programming experts or similar, rather we are low level guys + who's main job is x86 & AMD64 virtualization. So, please don't + be too hard on us. :-) + + +A logical table layout can be found in TestManagerDatabaseMap.png (created by +Oracle SQL Data Modeler, stored in TestManagerDatabase.dmd). The physical +database layout can be found in TestManagerDatabaseInit.pgsql postgreSQL +script. The script is commented. + + +Data History +------------ + +We need to somehow track configuration changes over time. We also need to +be able to query the exact configuration a test set was run with so we can +understand and make better use of the results. + +There are different techniques for archiving this, one is tuple-versioning +( http://en.wikipedia.org/wiki/Tuple-versioning ), another is log trigger +( http://en.wikipedia.org/wiki/Log_trigger ). We use tuple-versioning in +this database, with 'effective' as start date field name and 'expire' as +the end (exclusive). + +Tuple-versioning has a shortcoming wrt to keys, both primary and foreign. +The primary key of a table employing tuple-versioning is really +'id' + 'valid_period', where the latter is expressed using two fields +([effective...expire-1]). Only, how do you tell the database engine that +it should not allow overlapping valid_periods? Useful suggestions are +welcomed. :-) + +Foreign key references to a table using tuple-versioning is running into +trouble because of the time axis and that to our knowledge foreign keys +must reference exactly one row in the other table. When time is involved +what we wish to tell the database is that at any given time, there actually +is exactly one row we want to match in the other table, only we've no idea +how to express this. So, many foreign keys are not expressed in SQL of this +database. 
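Leaving the key problems aside, the basic tuple-versioning update looks
something like the following sketch. The real TestBoxes table defined in
TestManagerDatabaseInit.pgsql has many more columns than shown here, and the
placeholders are psycopg2 style; this is only meant to illustrate the
expire-and-insert pattern::

    def historizeTestBoxName(oDb, idTestBox, sNewName):
        # Expire the current row; in PostgreSQL CURRENT_TIMESTAMP is the
        # transaction start time, so OLD.tsExpire and NEW.tsEffective match.
        oDb.execute("UPDATE TestBoxes"
                    "   SET tsExpire = CURRENT_TIMESTAMP"
                    " WHERE idTestBox = %s"
                    "   AND tsExpire = 'infinity'::timestamp;",
                    (idTestBox,))
        # Insert the replacement row, valid from now until changed again.
        oDb.execute("INSERT INTO TestBoxes (idTestBox, sName, tsEffective, tsExpire)"
                    " VALUES (%s, %s, CURRENT_TIMESTAMP, 'infinity'::timestamp);",
                    (idTestBox, sNewName))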
+ +In some cases, we extend the tuple-versioning with a generation ID so that +normal foreign key referencing can be used. We only use this for recording +(references in testset) and scheduling (schedqueue), as using it more widely +would force updates (gen_id changes) to propagate into all related tables. + +See also: + - http://en.wikipedia.org/wiki/Slowly_changing_dimension + - http://en.wikipedia.org/wiki/Change_data_capture + - http://en.wikipedia.org/wiki/Temporal_database + + + +Test Manager: Execution +======================= + + + +Test Manager: Scenarios +======================= + + + +#1 - Testbox Signs On (At Bootup) +--------------------------------- + +The testbox supplies a number of inputs when reporting for duty: + - IP address. + - System UUID. + - OS name. + - OS version. + - CPU architecture. + - CPU count (= threads). + - CPU VT-x/AMD-V capability. + - CPU nested paging capability. + - Chipset I/O MMU capability. + - Memory size. + - Scratch size space (for testing). + - Testbox Script revision. + +Results: + - ACK or NACK. + - Testbox ID and name on ACK. + +After receiving a ACK the testbox will ask for work to do, i.e. continue with +scenario #2. In the NACK case, it will sleep for 60 seconds and try again. + + +Actions: + +1. Validate the testbox by looking the UUID up in the TestBoxes table. + If not found, NACK the request. SQL:: + + SELECT idTestBox, sName + FROM TestBoxes + WHERE uuidSystem = :sUuid + AND tsExpire = 'infinity'::timestamp; + +2. Check if any of the information by testbox script has changed. The two + sizes are normalized first, memory size rounded to nearest 4 MB and scratch + space is rounded down to nearest 64 MB. If anything changed, insert a new + row in the testbox table and historize the current one, i.e. set + OLD.tsExpire to NEW.tsEffective and get a new value for NEW.idGenTestBox. + +3. Check with TestBoxStatuses: + a) If there is an row for the testbox in it already clean up change it + to 'idle' state and deal with any open testset like described in + scenario #9. + b) If there is no row, add one with 'idle' state. + +4. ACK the request and pass back the idTestBox. + + +Note! Testbox.enabled is not checked here, that is only relevant when it asks + for a new task (scenario #2 and #5). + +Note! Should the testbox script detect changes in any of the inputs, it should + redo the sign in. + +Note! In scenario #8, the box will not sign on until it has done the reboot and + cleanup reporting! + + +#2 - Testbox Asks For Work To Do +--------------------------------- + + +Inputs: + - The testbox is supplying its IP indirectly. + - The testbox should supply its UUID and ID directly. + +Results: + - IDLE, WAIT, EXEC, REBOOT, UPGRADE, UPGRADE-AND-REBOOT, SPECIAL or DEAD. + +Actions: + +1. Validate the ID and IP by selecting the currently valid testbox row:: + + SELECT idGenTestBox, fEnabled, idSchedGroup, enmPendingCmd + FROM TestBoxes + WHERE id = :id + AND uuidSystem = :sUuid + AND ip = :ip + AND tsExpire = 'infinity'::timestamp; + + If NOT found return DEAD to the testbox client (it will go back to sign on + mode and retry every 60 seconds or so - see scenario #1). + + Note! The WUI will do all necessary clean-ups when deleting a testbox, so + contrary to the initial plans, we don't need to do anything more for + the DEAD status. + +2. Check with TestBoxStatuses (maybe joined with query from 1). + + If enmState is 'gang-gathering': Goto scenario #6 on timeout or pending + 'abort' or 'reboot' command. Otherwise, tell the testbox to WAIT [done]. 
+ + If enmState is 'gang-testing': The gang has been gathered and execution + has been triggered. Goto 5. + + If enmState is not 'idle', change it to 'idle'. + + If idTestSet is not NULL, CALL scenario #9 to it up. + + If there is a pending abort command, remove it. + + If there is a pending command and the old state doesn't indicate that it was + being executed, GOTO scenario #3. + + Note! There should be a TestBoxStatuses row after executing scenario #1, + however should none be found for some funky reason, returning DEAD + will fix the problem (see above) + +3. If the testbox was marked as disabled, respond with an IDLE command to the + testbox [done]. (Note! Must do this after TestBoxStatuses maintenance from + point 2, or abandoned tests won't be cleaned up after a testbox is disabled.) + +4. Consider testcases in the scheduling queue, pick the first one which the + testbox can execute. There is a concurrency issue here, so we put and + exclusive lock on the SchedQueues table while considering its content. + + The cursor we open looks something like this:: + + SELECT idItem, idGenTestCaseArgs, + idTestSetGangLeader, cMissingGangMembers + FROM SchedQueues + WHERE idSchedGroup = :idSchedGroup + AND ( bmHourlySchedule is NULL + OR get_bit(bmHourlySchedule, :iHourOfWeek) = 1 ) --< does this work? + ORDER BY ASC idItem; + + If there no rows are returned (this can happen because no testgroups are + associated with this scheduling group, the scheduling group is disabled, + or because the queue is being regenerated), we will tell the testbox to + IDLE [done]. + + For each returned row we will: + a) Check testcase/group dependencies. + b) Select a build (and default testsuite) satisfying the dependencies. + c) Check the testcase requirements with that build in mind. + d) If idTestSetGangLeader is NULL, try allocate the necessary resources. + e) If it didn't check out, fetch the next row and redo from (a). + f) Tentatively create a new test set row. + g) If not gang scheduling: + - Next state: 'testing' + ElIf we're the last gang participant: + - Set idTestSetGangLeader to NULL. + - Set cMissingGangMembers to 0. + - Next state: 'gang-testing' + ElIf we're the first gang member: + - Set cMissingGangMembers to TestCaseArgs.cGangMembers - 1. + - Set idTestSetGangLeader to our idTestSet. + - Next state: 'gang-gathering' + Else: + - Decrement cMissingGangMembers. + - Next state: 'gang-gathering' + + If we're not gang scheduling OR cMissingGangMembers is 0: + Move the scheduler queue entry to the end of the queue. + + Update our TestBoxStatuses row with the new state and test set. + COMMIT; + +5. If state is 'testing' or 'gang-testing': + EXEC reponse. + + The EXEC response for a gang scheduled testcase includes a number of + extra arguments so that the script knows the position of the testbox + it is running on and of the other members. This means the that the + TestSet.iGangMemberNo is passed using --gang-member-no and the IP + addresses of the all gang members using --gang-ipv4- . + Else (state is 'gang-gathering'): + WAIT + + + +#3 - Pending Command When Testbox Asks For Work +----------------------------------------------- + +This is a subfunction of scenario #2 and #5. + +As seen in scenario #2, the testbox will send 'abort' commands to /dev/null +when it finds one when not executing a test. This includes when it reports +that the test has completed (no need to abort a completed test, wasting lot +of effort when standing at the finish line). 
+ +The other commands, though, are passed back to the testbox. The testbox +script will respond with an ACK or NACK as it sees fit. If NACKed, the +pending command will be removed (pending_cmd set to none) and that's it. +If ACKed, the state of the testbox will change to that appropriate for the +command and the pending_cmd set to none. Should the testbox script fail to +respond, the command will be repeated the next time it asks for work. + + + +#4 - Testbox Uploads Results During Test +---------------------------------------- + + +TODO + + +#5 - Testbox Completes Test and Asks For Work +--------------------------------------------- + +This is very similar to scenario #2 + +TODO + + +#6 - Gang Gathering Timeout +--------------------------- + +This is a subfunction of scenario #2. + +When gathering a gang of testboxes for a testcase, we do not want to wait +forever and have testboxes doing nothing for hours while waiting for partners. +So, the gathering has a reasonable timeout (imagine something like 20-30 mins). + +Also, we need some way of dealing with 'abort' and 'reboot' commands being +issued while waiting. The easy way out is pretend it's a time out. + +When changing the status to 'gang-timeout' we have to be careful. First of all, +we need to exclusively lock the SchedQueues and TestBoxStatuses (in that order) +and re-query our status. If it changed redo the checks in scenario #2 point 2. + +If we still want to timeout/abort, change the state from 'gang-gathering' to +'gang-gathering-timedout' on all the gang members that has gathered so far. +Then reset the scheduling queue record and move it to the end of the queue. + + +When acting on 'gang-timeout' the TM will fail the testset in a manner similar +to scenario #9. No need to repeat that. + + + +#7 - Gang Cleanup +----------------- + +When a testbox completes a gang scheduled test, we will have to serialize +resource cleanup (both globally and on testboxes) as they stop. More details +can be found in the documentation of 'gang-cleanup'. + +So, the transition from 'gang-testing' is always to 'gang-cleanup'. When we +can safely leave 'gang-cleanup' is decided by the query:: + + SELECT COUNT(*) + FROM TestBoxStatuses, + TestSets + WHERE TestSets.idTestSetGangLeader = :idTestSetGangLeader + AND TestSets.idTestBox = TestBoxStatuses.idTestBox + AND TestBoxStatuses.enmState = 'gang-running'::TestBoxState_T; + +As long as there are testboxes still running, we stay in the 'gang-cleanup' +state. Once there are none, we continue closing the testset and such. + + + +#8 - Testbox Reports A Crash During Test Execution +-------------------------------------------------- + +TODO + + +#9 - Cleaning Up Abandoned Testcase +----------------------------------- + +This is a subfunction of scenario #1 and #2. The actions taken are the same in +both situations. The precondition for taking this path is that the row in the +testboxstatus table is referring to a testset (i.e. testset_id is not NULL). + + +Actions: + +1. If the testset is incomplete, we need to completed: + a) Add a message to the root TestResults row, creating one if necessary, + that explains that the test was abandoned. This is done + by inserting/finding the string into/in TestResultStrTab and adding + a row to TestResultMsgs with idStrMsg set to that string id and + enmLevel set to 'failure'. + b) Mark the testset as failed. + +2. Free any global resources referenced by the test set. This is done by + deleting all rows in GlobalResourceStatuses matching the testbox id. + +3. 
Set the idTestSet to NULL in the TestBoxStatuses row. + + + +#10 - Cleaning Up a Disabled/Dead TestBox +----------------------------------------- + +The UI needs to be able to clean up the remains of a testbox which for some +reason is out of action. Normal cleaning up of abandoned testcases requires +that the testbox signs on or asks for work, but if the testbox is dead or +in some way indisposed, it won't be doing any of that. So, the testbox +sheriff needs to have a way of cleaning up after it. + +It's basically a manual scenario #9 but with some safe guards, like checking +that the box hasn't been active for the last 1-2 mins (max idle/wait time * 2). + + +Note! When disabling a box that still executing the testbox script, this + cleanup isn't necessary as it will happen automatically. Also, it's + probably desirable that the testbox finishes what ever it is doing first + before going dormant. + + + +Test Manager: Analysis +======================= + +One of the testbox sheriff's tasks is to try figure out the reason why something +failed. The test manager will provide facilities for doing so from very early +in it's implementation. + + +We need to work out some useful status reports for the early implementation. +Later there will be more advanced analysis tools, where for instance we can +create graphs from selected test result values or test execution times. + + + +Implementation Plan +=================== + +This has changed for various reasons. The current plan is to implement the +infrastructure (TM & testbox script) first and do a small deployment with the +2-5 test drivers in the Testsuite as basis. Once the bugs are worked out, we +will convert the rest of the tests and start adding new ones. + +We just need to finally get this done, no point in doing it piecemeal by now! + + +Test Manager Implementation Sub-Tasks +------------------------------------- + +The implementation of the test manager and adjusting/completing of the testbox +script and the test drivers are tasks which can be done by more than one +person. Splitting up the TM implementation into smaller tasks should allow +parallel development of different tasks and get us working code sooner. + + +Milestone #1 +------------ + +The goal is to getting the fundamental testmanager engine implemented, debugged +and working. With the exception of testboxes, the configuration will be done +via SQL inserts. + +Tasks in somewhat prioritized order: + + - Kick off test manager. It will live in testmanager/. Salvage as much as + possible from att/testserv. Create basic source and file layout. + + - Adjust the testbox script, part one. There currently is a testbox script + in att/testbox, this shall be moved up into testboxscript/. The script + needs to be adjusted according to the specification layed down earlier + in this document. Installers or installation scripts for all relevant + host OSes are required. Left for part two is result reporting beyond the + primary log. This task must be 100% feature complete, on all host OSes, + there is no room for FIXME, XXX or @todo here. + + - Implement the schedule queue generator. + + - Implement the testbox dispatcher in TM. Support all the testbox script + responses implemented above, including upgrading the testbox script. + + - Implement simple testbox management page. + + - Implement some basic activity and result reports so that we can see + what's going on. + + - Create a testmanager / testbox test setup. This lives in selftest/. + + 1. Set up something that runs, no fiddly bits. 
Debug till it works. + 2. Create a setup that tests testgroup dependencies, i.e. real tests + depending on smoke tests. + 3. Create a setup that exercises testcase dependency. + 4. Create a setup that exercises global resource allocation. + 5. Create a setup that exercises gang scheduling. + + - Check that all features work. + + +Milestone #2 +------------ + +The goal is getting to VBox testing. + +Tasks in somewhat prioritized order: + + - Implement full result reporting in the testbox script and testbox driver. + A testbox script specific reporter needs to be implemented for the + testdriver framework. The testbox script needs to forward the results to + the test manager, or alternatively the testdriver report can talk + directly to the TM. + + - Implement the test manager side of the test result reporting. + + - Extend the selftest with some setup that report all kinds of test + results. + + - Implement script/whatever feeding builds to the test manager from the + tinderboxes. + + - The toplevel test driver is a VBox thing that must be derived from the + base TestDriver class or maybe the VBox one. It should move from + toptestdriver to testdriver and be renamed to vboxtltd or smth. + + - Create a vbox testdriver that boots the t-xppro VM once and that's it. + + - Create a selftest setup which tests booting t-xppro taking builds from + the tinderbox. + + +Milestone #3 +------------ + +The goal for this milestone is configuration and converting current testcases, +the result will be the a minimal test deployment (4-5 new testboxes). + +Tasks in somewhat prioritized order: + + - Implement testcase configuration. + + - Implement testgroup configuration. + + - Implement build source configuration. + + - Implement scheduling group configuration. + + - Implement global resource configuration. + + - Re-visit the testbox configuration. + + - Black listing of builds. + + - Implement simple failure analysis and reporting. + + - Implement the initial smoke tests modelled on the current smoke tests. + + - Implement installation tests for Windows guests. + + - Implement installation tests for Linux guests. + + - Implement installation tests for Solaris guest. + + - Implement installation tests for OS/2 guest. + + - Set up a small test deployment. + + +Further work +------------ + +After milestone #3 has been reached and issues found by the other team members +have been addressed, we will probably go for full deployment. + +Beyond this point we will need to improve reporting and analysis. There may be +configuration aspects needing reporting as well. + +Once deployed, a golden rule will be that all new features shall have test +coverage. Preferably, implemented by someone else and prior to the feature +implementation. + + + + +Discussion Logs +=============== + +2009-07-21,22,23 Various Discussions with Michal and/or Klaus +------------------------------------------------------------- + +- Scheduling of tests requiring more than one testbox. +- Scheduling of tests that cannot be executing concurrently on several machines + because of some global resource like an iSCSI target. +- Manually create the test config permutations instead of having the test + manager create all possible ones and wasting time. +- Distinguish between built types so we can run smoke tests on strick builds as + well as release ones. + + +2009-07-20 Brief Discussion with Michal +---------------------------------------- + +- Installer for the testbox script to make bringing up a new testbox even + smoother. 
+ + +2009-07-16 Raw Input +-------------------- + +- test set. recursive collection of: + - hierachical subtest name (slash sep) + - test parameters / config + - bool fail/succ + - attributes (typed?) + - test time + - e.g. throughput + - subresults + - log + - screenshots,.... + +- client package (zip) dl from server (maybe client caching) + + +- thoughts on bits to do at once. + - We *really* need the basic bits ASAP. + - client -> support for test driver + - server -> controls configs + - cleanup on both sides + + +2009-07-15 Raw Input +-------------------- + +- testing should start automatically +- switching to branch too tedious +- useful to be able to partition testboxes (run specific builds on some boxes, let an engineer have a few boxes for a while). +- test specification needs to be more flexible (select tests, disable test, test scheduling (run certain tests nightly), ... ) +- testcase dependencies (blacklisting builds, run smoketests on box A before long tests on box B, ...) +- more testing flexibility, more test than just install/moke. For instance unit tests, benchmarks, ... +- presentation/analysis: graphs!, categorize bugs, columns reorganizing grouped by test (hierarchical), overviews, result for last day. +- testcase specificion, variables (e.g. I/O-APIC, SMP, HWVIRT, SATA...) as sub-tests +- interation with ILOM/...: reset systems +- Changes needs LDAP authentication +- historize all configuration w/ name +- ability to run testcase locally (provided the VDI/ISO/whatever extra requirements can be met). + + +----- + +.. [1] no such footnote + +----- + +:Status: $Id: AutomaticTestingRevamp.txt $ +:Copyright: Copyright (C) 2010-2023 Oracle Corporation. diff --git a/src/VBox/ValidationKit/docs/Makefile.kmk b/src/VBox/ValidationKit/docs/Makefile.kmk new file mode 100644 index 00000000..e92b5123 --- /dev/null +++ b/src/VBox/ValidationKit/docs/Makefile.kmk @@ -0,0 +1,69 @@ +# $Id: Makefile.kmk $ +## @file +# VirtualBox Validation Kit - Makefile for generating .html from .txt. +# + +# +# Copyright (C) 2006-2023 Oracle and/or its affiliates. +# +# This file is part of VirtualBox base platform packages, as +# available from https://www.virtualbox.org. +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public License +# as published by the Free Software Foundation, in version 3 of the +# License. +# +# This program is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU +# General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, see . +# +# The contents of this file may alternatively be used under the terms +# of the Common Development and Distribution License Version 1.0 +# (CDDL), a copy of it is provided in the "COPYING.CDDL" file included +# in the VirtualBox distribution, in which case the provisions of the +# CDDL are applicable instead of those of the GPL. +# +# You may elect to license modified versions of this file under the +# terms and conditions of either the GPL or the CDDL or both. +# +# SPDX-License-Identifier: GPL-3.0-only OR CDDL-1.0 +# + +DEPTH = ../../../.. +include $(KBUILD_PATH)/header.kmk + +# Figure out where rst2html.py is. 
+ifndef VBOX_RST2HTML + VBOX_RST2HTML := $(firstword $(which $(foreach pyver, 3.2 3.1 3.0 2.8 2.7 2.6 2.5 2.4 ,rst2html-$(pyver).py) ) ) + ifeq ($(VBOX_RST2HTML),) + if $(KBUILD_HOST) == "win" && $(VBOX_BLD_PYTHON) != "" && $(dir $(VBOX_BLD_PYTHON)) != "./" + VBOX_RST2HTML := $(dir $(VBOX_BLD_PYTHON))Scripts/rst2html.py + else + VBOX_RST2HTML := rst2html.py + endif + endif + if1of ($(KBUILD_HOST), win) + VBOX_RST2HTML := $(VBOX_BLD_PYTHON) $(VBOX_RST2HTML) + endif +endif + +GENERATED_FILES = \ + AutomaticTestingRevamp.html \ + VBoxValidationKitReadMe.html \ + VBoxAudioValidationKitReadMe.html \ + TestBoxImaging.html + +all: $(GENERATED_FILES) + +$(foreach html,$(GENERATED_FILES) \ +,$(eval $(html): $(basename $(html)).txt ; $$(REDIRECT) -E LC_ALL=C -- $$(VBOX_RST2HTML) --no-generator $$< $$@)) + +$(foreach html,$(GENERATED_FILES), $(eval $(basename $(html)).o:: $(html))) # editor compile aliases + +clean: + kmk_builtin_rm -f -- $(GENERATED_FILES) diff --git a/src/VBox/ValidationKit/docs/TestBoxImaging.html b/src/VBox/ValidationKit/docs/TestBoxImaging.html new file mode 100644 index 00000000..8675c137 --- /dev/null +++ b/src/VBox/ValidationKit/docs/TestBoxImaging.html @@ -0,0 +1,758 @@ + + + + + + +TestBoxImaging.txt + + + +
+ + +
+

Testbox Imaging (Backup / Restore)

+
+

Introduction

+

This document explores deploying a very simple drive imaging solution to help
+avoid needing to manually reinstall testboxes when a disk goes bust or the OS
+install seems to be corrupted.

+
+
+
+

Definitions / Glossary

+

See AutomaticTestingRevamp.txt.

+
+
+

Objectives

+
+
    +
  • Off site, no admin interaction (no need for ILOM or similar).
  • +
  • OS independent.
  • +
  • Space and bandwidth efficient.
  • +
  • As automatic as possible.
  • +
  • Logging.
  • +
+
+
+
+

Overview of the Solution

+

Here is a brief summary:

+
+
    +
  • Always boot testboxes via PXE using PXELINUX.
  • +
  • Default configuration is local boot (hard disk / SSD)
  • +
  • Restore/backup action triggered by machine specific PXE config.
  • +
  • Boots special debian maintenance install off NFS.
  • +
  • A maintenance service (systemd style) does the work.
  • +
  • The service reads action from TFTP location and performs it.
  • +
  • When done the service removes the TFTP machine specific config +and reboots the system.
  • +
+
+
+
Maintenance actions are:
+
    +
  • backup
  • +
  • backup-again
  • +
  • restore
  • +
  • refresh-info
  • +
  • rescue
  • +
+
+
+

Possible modifier that indicates a subset of disk on testboxes with other OSes +installed. Support for partition level backup/restore is not explored here.

+
+

How to use

+

To perform one of the above maintenance actions on a testbox, run the +testbox-pxe-conf.sh script:

+
+/mnt/testbox-tftp/pxeclient.cfg/testbox-pxe-conf.sh 10.165.98.220 rescue
+
+

Then trigger a reboot. The box will then boot the NFS rooted debian image and +execute the maintenance action. On success, it will remove the testbox hex-IP +config file and reboot again.
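The machine specific configuration file is named after the testbox IP address
in PXELINUX hexadecimal notation. As a small illustration (not part of the
shipped scripts), the name can be computed like this in Python:

    def pxelinuxConfigName(sIpv4):
        # '10.0.2.15' -> '0A00020F'; '10.165.98.220' -> '0AA562DC'.
        return ''.join('%02X' % (int(sOctet),) for sOctet in sIpv4.split('.'))

So the rescue request above produces a file named 0AA562DC next to the default
configuration file, and that is the file removed again once the maintenance
action has completed successfully.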

+
+
+
+

Storage Server

+

The storage server will have three areas used here. Using NFS for all three +avoids extra work getting CIFS sharing right too (NFS is already a pain).

+
+
    +
  1. /export/testbox-tftp - TFTP config area. Read-write.
  2. +
  3. /export/testbox-backup - Images and logs. Read-write.
  4. +
  5. /export/testbox-nfsroot - Custom debian. Read-only, no root squash.
  6. +
+
+
+
+

TFTP (/export/testbox-tftp)

+

The testbox-tftp share needs to be writable, root squashing is okay.

+

We need files from both PXELINUX and SYSLINUX to make this work now. On a +debian system, the pxelinux and syslinux packages needs to be +installed. We actually do this further down when setting up the nfsroot, so +it's possible to get them from there by postponing this step a little. On +debian 8.6.0 the PXELINUX files are found in /usr/lib/PXELINUX and the +SYSLINUX ones in /usr/lib/syslinux.

+

The initial PXE image as well as associated modules comes in three variants, +BIOS, 32-bit EFI and 64-bit EFI. We'll only need the BIOS one for now. +Perform the following copy operations:

+
+cp /usr/lib/PXELINUX/pxelinux.0 /mnt/testbox-tftp/
+cp /usr/lib/syslinux/modules/*/ldlinux.* /mnt/testbox-tftp/
+cp -R /usr/lib/syslinux/modules/bios  /mnt/testbox-tftp/
+cp -R /usr/lib/syslinux/modules/efi32 /mnt/testbox-tftp/
+cp -R /usr/lib/syslinux/modules/efi64 /mnt/testbox-tftp/
+
+

For simplicity, all the testboxes boot using good old fashioned BIOS, no EFI. +However, it doesn't really hurt to be prepared.

+

The PXELINUX related files go in the root of the testbox-tftp share. (As
+mentioned further down, these can be installed on a debian system by running
+apt-get install pxelinux syslinux.) We need the *pxelinux.0 files
+typically found in /usr/lib/PXELINUX/ on debian systems (recent ones
+anyway). It is possible we may need one or more of the modules [1] that
+ship with PXELINUX/SYSLINUX, so do copy /usr/lib/syslinux/modules to
+testbox-tftp/modules as well.

+

The directory layout related to the configuration files is dictated by the +PXELINUX configuration file searching algorithm [2]. Create a subdirectory +pxelinux.cfg/ under testbox-tftp and create the world readable file +default with the following content:

+
+PATH bios
+DEFAULT local-boot
+LABEL local-boot
+LOCALBOOT
+
+

This will make the default behavior to boot the local disk system.

+

Copy the testbox-pxe-conf.sh script file found in the same directory as +this document to /mnt/testbox-tftp/pxelinux.cfg/. Edit the copy to correct +the IP addresses near the top, as well as any linux, TFTP and PXE details near +the bottom of the file. This script will generate the PXE configuration file +when performing maintenance on a testbox.

+
+
+

Images and logs (/export/testbox-backup)

+

The testbox-backup share needs to be writable, root squashing is okay.

+

In the root there must be a file testbox-backup so we can easily tell +whether we've actually mounted the share or are just staring at an empty mount +point directory.

+

The testbox-maintenance.sh script maintains a global log in the root +directory that's called maintenance.log. Errors will be logged there as +well as a ping and the action.

+

We use a directory layout based on dotted decimal IP addresses here, so for a +server with the IP 10.40.41.42 all its file will be under 10.40.41.42/:

+
+
<hostname>
+
The name of the testbox (empty file). Help finding a testbox by name.
+
testbox-info.txt
+
Information about the testbox. Starting off with the name, decimal IP, +PXELINUX style hexadecimal IP, and more.
+
maintenance.log
+
Maintenance log file recording what the maintenance service does.
+
disk-devices.lst
+
Optional list of disk devices to consider backing up or restoring. This is
+intended for testboxes with additional disks that are used for other purposes
+and should not be touched.
+
sda.raw.gz
+
The gzipped raw copy of the sda device of the testbox.
+
sd[bcdefgh].raw.gz
+
The gzipped raw copies of sdb, sdc, sde, sdf, sdg, sdh, etc. if any of them exist
+and are disks/SSDs.
+
Note! If it turns out we can be certain to get a valid host name, we might just
+
switch to use the hostname as the directory name instead of the IP.
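To make the image format concrete: the *.raw.gz files are nothing more than a
gzipped dump of the whole block device, i.e. something along the lines of the
illustrative Python snippet below (the actual work, including logging, is done
by testbox-maintenance.sh, not by this snippet):

    import gzip
    import shutil

    def backupDisk(sDevice='/dev/sda', sImage='/mnt/testbox-backup/10.40.41.42/sda.raw.gz'):
        # Requires root to read the raw device; copies in 1 MiB chunks.
        with open(sDevice, 'rb') as oSrc, gzip.open(sImage, 'wb') as oDst:
            shutil.copyfileobj(oSrc, oDst, 1024 * 1024)

Restoring is simply the reverse direction, gunzipping the image back onto the
device.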
+
+
+
+

Debian NFS root (/export/testbox-nfsroot)

+

The testbox-nfsroot share should be read-only and must not have root
+squashing enabled. Also, make sure setting the set-uid-bit is allowed by the
+server, or su and sudo won't work.

+

There are several ways of creating a debian nfsroot, but since we've got a +tool like VirtualBox around we've just installed it in a VM, prepared it, +and copied it onto the NFS server share.

+

As of writing debian 8.6.0 is current, so a minimal 64-bit install of it was +done in a VM. After installation the following modifications was done:

+
+
    +
  • apt-get install pxelinux syslinux initramfs-tools zip gddrescue sudo joe +and optionally apt-get install smbclient cifs-utils.

    +
  • +
  • /etc/default/grub was modified to set GRUB_CMDLINE_LINUX_DEFAULT to +"" instead of "quiet". This allows us to see messages during boot +and perhaps spot why something doesn't work on a testbox. Regenerate the +grub configuration file by running update-grub afterwards.

    +
  • +
  • /etc/sudoers was modified to allow the vbox user to use sudo without
+requiring any password.

    +
  • +
  • Create the directory /etc/systemd/system/getty@tty1.service.d and create +the file noclear.conf in it with the following content:

    +
    +[Service]
    +TTYVTDisallocate=no
    +
    +

    This stops getty from clearing VT1 and lets us see the tail of the boot up
+messages, which includes messages from the testbox-maintenance service.

    +
  • +
  • Mount the testbox-nfsroot under /mnt/ with write privileges. (The write +privileges are temporary - don't forget to remove them later on.):

    +
    +mount -t nfs myserver.com:/export/testbox-nfsroot
    +
    +

    Note! Adding -o nfsvers=3 may help with some NFSv4 servers.

    +
  • +
  • Copy the debian root and dev file system onto nfsroot. If you have ssh +access to the NFS server, the quickest way to do it is to use tar:

    +
    +tar -cz --one-file-system -f /mnt/testbox-maintenance-nfsroot.tar.gz . dev/
    +
    +

    An alternative is cp -ax . /mnt/. &&  cp -ax dev/. /mnt/dev/. but this +is quite a bit slower, obviously.

    +
  • +
  • Edit /etc/ssh/sshd_config setting PermitRootLogin to yes so we can ssh +in as root later on.

    +
  • +
  • chroot into the nfsroot: chroot /mnt/

    +
    +
      +
    • mount -o proc proc /proc

      +
    • +
    • mount -o sysfs sysfs /sys

      +
    • +
    • mkdir /mnt/testbox-tftp /mnt/testbox-backup

      +
    • +
    • Recreate /etc/fstab with:

      +
      +proc                             /proc               proc  defaults   0 0
      +/dev/nfs                         /                   nfs   defaults   1 1
      +10.42.1.1:/export/testbox-tftp   /mnt/testbox-tftp   nfs   tcp,nfsvers=3,noauto  2 2
      +10.42.1.1:/export/testbox-backup /mnt/testbox-backup nfs   tcp,nfsvers=3,noauto  3 3
      +
      +

      We use NFS version 3 as that works better for our NFS server and client, +remove if not necessary. The noauto option is to work around mount +trouble during early bootup on some of our boxes.

      +
    • +
    • Do mount /mnt/testbox-tftp && mount /mnt/testbox-backup to mount the +two shares. This may be a good time to execute the instructions in the +sections above relating to these two shares.

      +
    • +
    • Edit /etc/initramfs-tools/initramfs.conf and change the MODULES +value from most to netboot.

      +
    • +
    • Append aufs to /etc/initramfs-tools/modules. The advanced +multi-layered unification filesystem (aufs) enables us to use a +read-only NFS root. [3] [4] [5]

      +
    • +
    • Create /etc/initramfs-tools/scripts/init-bottom/00_aufs_init as +an executable file with the following content:

      +
      +#!/bin/sh
      +# Don't run during update-initramfs:
      +case "$1" in
      +    prereqs)
      +        exit 0;
      +        ;;
      +esac
      +
      +modprobe aufs
      +mkdir -p /ro /rw /aufs
      +mount -t tmpfs tmpfs /rw -o noatime,mode=0755
      +mount --move $rootmnt /ro
      +mount -t aufs aufs /aufs -o noatime,dirs=/rw:/ro=ro
      +mkdir -p /aufs/rw /aufs/ro
      +mount --move /ro /aufs/ro
      +mount --move /rw /aufs/rw
      +mount --move /aufs /root
      +exit 0
      +
      +
    • +
    • Update the init ramdisk: update-initramfs -u -k all

      +
      +
      Note! It may be necessary to do mount -t tmpfs tmpfs /var/tmp to help
      +

      this operation succeed.

      +
      +
      +
    • +
    • Copy /boot to /mnt/testbox-tftp/maintenance-boot/.

      +
    • +
    • Copy the testbox-maintenance.sh file found in the same directory as this +document to /root/scripts/ (need to create the dir) and make it +executable.

      +
    • +
    • Create the systemd service file for the maintenance service as +/etc/systemd/system/testbox-maintenance.service with the content:

      +
      +[Unit]
      +Description=Testbox Maintenance
      +After=network.target
      +Before=getty@tty1.service
      +
      +[Service]
      +Type=oneshot
      +RemainAfterExit=True
      +ExecStart=/root/scripts/testbox-maintenance.sh
      +ExecStartPre=/bin/echo -e \033%G
      +ExecReload=/bin/kill -HUP $MAINPID
      +WorkingDirectory=/tmp
      +Environment=TERM=xterm
      +StandardOutput=journal+console
      +
      +[Install]
      +WantedBy=multi-user.target
      +
      +
    • +
    • Enable our service: systemctl enable /etc/systemd/system/testbox-maintenance.service

      +
    • +
    • xxxx ... more ???

      +
    • +
    • Before leaving the chroot, do mount /proc /sys /mnt/testbox-*.

      +
    • +
    +
    +
  • +
  • Testing the setup from a VM is kind of useful (if the nfs server can be
+convinced to accept root nfs mounts from non-privileged client ports):

    +
    +
      +
    • Create a VM using the 64-bit debian profile. Let's call it "pxe-vm".

      +
    • +
    • Mount the TFTP share somewhere, like M: or /mnt/testbox-tftp.

      +
    • +
    • Reconfigure the NAT DHCP and TFTP bits:

      +
      +VBoxManage setextradata pxe-vm VBoxInternal/PDM/DriverTransformations/pxe/AboveDriver       NAT
      +VBoxManage setextradata pxe-vm VBoxInternal/PDM/DriverTransformations/pxe/Action            mergeconfig
      +VBoxManage setextradata pxe-vm VBoxInternal/PDM/DriverTransformations/pxe/Config/TFTPPrefix M:/
      +VBoxManage setextradata pxe-vm VBoxInternal/PDM/DriverTransformations/pxe/Config/BootFile   pxelinux.0
      +
      +
    • +
    • Create the file testbox-tftp/pxelinux.cfg/0A00020F containing:

      +
      +PATH bios
      +DEFAULT maintenance
      +LABEL maintenance
      +  MENU LABEL Maintenance (NFS)
      +  KERNEL maintenance-boot/vmlinuz-3.16.0-4-amd64
      +  APPEND initrd=maintenance-boot/initrd.img-3.16.0-4-amd64 ro ip=dhcp aufs=tmpfs \
      +         boot=nfs root=/dev/nfs nfsroot=10.42.1.1:/export/testbox-nfsroot
      +LABEL local-boot
      +LOCALBOOT
      +
      +
    • +
    +
    +
  • +
+
+
+
+

Troubleshooting

+
+
PXE-E11 or something like No ARP reply
+
You probably got the TFTP and DHCP on different machines. Try moving the TFTP
+to the same machine as the DHCP, then the PXE stack won't have to do any
+additional ARP resolving. Google results suggest that a congested network
+could cause the ARP reply to get lost. Our suspicion is that it might also be
+related to the PXE stack shipping with the NIC.
+
+
+ + + + + +
[1]See http://www.syslinux.org/wiki/index.php?title=Category:Modules
+ + + + + +
[2]See http://www.syslinux.org/wiki/index.php?title=PXELINUX#Configuration
+ + + + + +
[3]See https://en.wikipedia.org/wiki/Aufs
+ + + + + +
[4]See http://shitwefoundout.com/wiki/Diskless_ubuntu
+ + + + + +
[5]See http://debianaddict.com/2012/06/19/diskless-debian-linux-booting-via-dhcppxenfstftp/
+
+ +++ + + + + + +
Status:$Id: TestBoxImaging.html $
Copyright:Copyright (C) 2010-2023 Oracle Corporation.
+
+
+ + diff --git a/src/VBox/ValidationKit/docs/TestBoxImaging.txt b/src/VBox/ValidationKit/docs/TestBoxImaging.txt new file mode 100644 index 00000000..9468d944 --- /dev/null +++ b/src/VBox/ValidationKit/docs/TestBoxImaging.txt @@ -0,0 +1,368 @@ + +Testbox Imaging (Backup / Restore) +================================== + + +Introduction +------------ + +This document is explores deploying a very simple drive imaging solution to help +avoid needing to manually reinstall testboxes when a disk goes bust or the OS +install seems to be corrupted. + + +Definitions / Glossary +====================== + +See AutomaticTestingRevamp.txt. + + +Objectives +========== + + - Off site, no admin interaction (no need for ILOM or similar). + - OS independent. + - Space and bandwidth efficient. + - As automatic as possible. + - Logging. + + +Overview of the Solution +======================== + +Here is a brief summary: + + - Always boot testboxes via PXE using PXELINUX. + - Default configuration is local boot (hard disk / SSD) + - Restore/backup action triggered by machine specific PXE config. + - Boots special debian maintenance install off NFS. + - A maintenance service (systemd style) does the work. + - The service reads action from TFTP location and performs it. + - When done the service removes the TFTP machine specific config + and reboots the system. + +Maintenance actions are: + - backup + - backup-again + - restore + - refresh-info + - rescue + +Possible modifier that indicates a subset of disk on testboxes with other OSes +installed. Support for partition level backup/restore is not explored here. + + +How to use +---------- + +To perform one of the above maintenance actions on a testbox, run the +``testbox-pxe-conf.sh`` script:: + + /mnt/testbox-tftp/pxeclient.cfg/testbox-pxe-conf.sh 10.165.98.220 rescue + +Then trigger a reboot. The box will then boot the NFS rooted debian image and +execute the maintenance action. On success, it will remove the testbox hex-IP +config file and reboot again. + + +Storage Server +============== + +The storage server will have three areas used here. Using NFS for all three +avoids extra work getting CIFS sharing right too (NFS is already a pain). + + 1. /export/testbox-tftp - TFTP config area. Read-write. + 2. /export/testbox-backup - Images and logs. Read-write. + 3. /export/testbox-nfsroot - Custom debian. Read-only, no root squash. + + +TFTP (/export/testbox-tftp) +============================ + +The testbox-tftp share needs to be writable, root squashing is okay. + +We need files from both PXELINUX and SYSLINUX to make this work now. On a +debian system, the ``pxelinux`` and ``syslinux`` packages needs to be +installed. We actually do this further down when setting up the nfsroot, so +it's possible to get them from there by postponing this step a little. On +debian 8.6.0 the PXELINUX files are found in ``/usr/lib/PXELINUX`` and the +SYSLINUX ones in ``/usr/lib/syslinux``. + +The initial PXE image as well as associated modules comes in three variants, +BIOS, 32-bit EFI and 64-bit EFI. We'll only need the BIOS one for now. +Perform the following copy operations:: + + cp /usr/lib/PXELINUX/pxelinux.0 /mnt/testbox-tftp/ + cp /usr/lib/syslinux/modules/*/ldlinux.* /mnt/testbox-tftp/ + cp -R /usr/lib/syslinux/modules/bios /mnt/testbox-tftp/ + cp -R /usr/lib/syslinux/modules/efi32 /mnt/testbox-tftp/ + cp -R /usr/lib/syslinux/modules/efi64 /mnt/testbox-tftp/ + + +For simplicity, all the testboxes boot using good old fashioned BIOS, no EFI. 
+However, it doesn't really hurt to be prepared. + +The PXELINUX related files goes in the root of the testbox-tftp share. (As +mentioned further down, these can be installed on a debian system by running +``apt-get install pxelinux syslinux``.) We need the ``*pxelinux.0`` files +typically found in ``/usr/lib/PXELINUX/`` on debian systems (recent ones +anyway). It is possible we may need one ore more fo the modules [1]_ that +ships with PXELINUX/SYSLINUX, so do copy ``/usr/lib/syslinux/modules`` to +``testbox-tftp/modules`` as well. + + +The directory layout related to the configuration files is dictated by the +PXELINUX configuration file searching algorithm [2]_. Create a subdirectory +``pxelinux.cfg/`` under ``testbox-tftp`` and create the world readable file +``default`` with the following content:: + + PATH bios + DEFAULT local-boot + LABEL local-boot + LOCALBOOT + +This will make the default behavior to boot the local disk system. + +Copy the ``testbox-pxe-conf.sh`` script file found in the same directory as +this document to ``/mnt/testbox-tftp/pxelinux.cfg/``. Edit the copy to correct +the IP addresses near the top, as well as any linux, TFTP and PXE details near +the bottom of the file. This script will generate the PXE configuration file +when performing maintenance on a testbox. + + +Images and logs (/export/testbox-backup) +========================================= + +The testbox-backup share needs to be writable, root squashing is okay. + +In the root there must be a file ``testbox-backup`` so we can easily tell +whether we've actually mounted the share or are just staring at an empty mount +point directory. + +The ``testbox-maintenance.sh`` script maintains a global log in the root +directory that's called ``maintenance.log``. Errors will be logged there as +well as a ping and the action. + +We use a directory layout based on dotted decimal IP addresses here, so for a +server with the IP 10.40.41.42 all its file will be under ``10.40.41.42/``: + +```` + The name of the testbox (empty file). Help finding a testbox by name. + +``testbox-info.txt`` + Information about the testbox. Starting off with the name, decimal IP, + PXELINUX style hexadecimal IP, and more. + +``maintenance.log`` + Maintenance log file recording what the maintenance service does. + +``disk-devices.lst`` + Optional list of disk devices to consider backuping up or restoring. This is + intended for testboxes with additional disks that are used for other purposes + and should touched. + +``sda.raw.gz`` + The gzipped raw copy of the sda device of the testbox. + +``sd[bcdefgh].raw.gz`` + The gzipped raw copy sdb, sdc, sde, sdf, sdg, sdh, etc if any of them exists + and are disks/SSDs. + + +Note! If it turns out we can be certain to get a valid host name, we might just + switch to use the hostname as the directory name instead of the IP. + + +Debian NFS root (/export/testbox-nfsroot) +========================================== + +The testbox-nfsroot share should be read-only and must **not** have root +squashing enabled. Also, make sure setting the set-uid-bit is allowed by the +server, or ``su` and ``sudo`` won't work + +There are several ways of creating a debian nfsroot, but since we've got a +tool like VirtualBox around we've just installed it in a VM, prepared it, +and copied it onto the NFS server share. + +As of writing debian 8.6.0 is current, so a minimal 64-bit install of it was +done in a VM. 
After installation the following modifications was done: + + - ``apt-get install pxelinux syslinux initramfs-tools zip gddrescue sudo joe`` + and optionally ``apt-get install smbclient cifs-utils``. + + - ``/etc/default/grub`` was modified to set ``GRUB_CMDLINE_LINUX_DEFAULT`` to + ``""`` instead of ``"quiet"``. This allows us to see messages during boot + and perhaps spot why something doesn't work on a testbox. Regenerate the + grub configuration file by running ``update-grub`` afterwards. + + - ``/etc/sudoers`` was modified to allow the ``vbox`` user use sudo without + requring any password. + + - Create the directory ``/etc/systemd/system/getty@tty1.service.d`` and create + the file ``noclear.conf`` in it with the following content:: + + [Service] + TTYVTDisallocate=no + + This stops getty from clearing VT1 and let us see the tail of the boot up + messages, which includes messages from the testbox-maintenance service. + + - Mount the testbox-nfsroot under ``/mnt/`` with write privileges. (The write + privileges are temporary - don't forget to remove them later on.):: + + mount -t nfs myserver.com:/export/testbox-nfsroot + + Note! Adding ``-o nfsvers=3`` may help with some NTFv4 servers. + + - Copy the debian root and dev file system onto nfsroot. If you have ssh + access to the NFS server, the quickest way to do it is to use ``tar``:: + + tar -cz --one-file-system -f /mnt/testbox-maintenance-nfsroot.tar.gz . dev/ + + An alternative is ``cp -ax . /mnt/. && cp -ax dev/. /mnt/dev/.`` but this + is quite a bit slower, obviously. + + - Edit ``/etc/ssh/sshd_config`` setting ``PermitRootLogin`` to ``yes`` so we can ssh + in as root later on. + + - chroot into the nfsroot: ``chroot /mnt/`` + + - ``mount -o proc proc /proc`` + + - ``mount -o sysfs sysfs /sys`` + + - ``mkdir /mnt/testbox-tftp /mnt/testbox-backup`` + + - Recreate ``/etc/fstab`` with:: + + proc /proc proc defaults 0 0 + /dev/nfs / nfs defaults 1 1 + 10.42.1.1:/export/testbox-tftp /mnt/testbox-tftp nfs tcp,nfsvers=3,noauto 2 2 + 10.42.1.1:/export/testbox-backup /mnt/testbox-backup nfs tcp,nfsvers=3,noauto 3 3 + + We use NFS version 3 as that works better for our NFS server and client, + remove if not necessary. The ``noauto`` option is to work around mount + trouble during early bootup on some of our boxes. + + - Do ``mount /mnt/testbox-tftp && mount /mnt/testbox-backup`` to mount the + two shares. This may be a good time to execute the instructions in the + sections above relating to these two shares. + + - Edit ``/etc/initramfs-tools/initramfs.conf`` and change the ``MODULES`` + value from ``most`` to ``netboot``. + + - Append ``aufs`` to ``/etc/initramfs-tools/modules``. The advanced + multi-layered unification filesystem (aufs) enables us to use a + read-only NFS root. [3]_ [4]_ [5]_ + + - Create ``/etc/initramfs-tools/scripts/init-bottom/00_aufs_init`` as + an executable file with the following content:: + + #!/bin/sh + # Don't run during update-initramfs: + case "$1" in + prereqs) + exit 0; + ;; + esac + + modprobe aufs + mkdir -p /ro /rw /aufs + mount -t tmpfs tmpfs /rw -o noatime,mode=0755 + mount --move $rootmnt /ro + mount -t aufs aufs /aufs -o noatime,dirs=/rw:/ro=ro + mkdir -p /aufs/rw /aufs/ro + mount --move /ro /aufs/ro + mount --move /rw /aufs/rw + mount --move /aufs /root + exit 0 + + - Update the init ramdisk: ``update-initramfs -u -k all`` + + Note! It may be necessary to do ``mount -t tmpfs tmpfs /var/tmp`` to help + this operation succeed. + + - Copy ``/boot`` to ``/mnt/testbox-tftp/maintenance-boot/``. 
+ + - Copy the ``testbox-maintenance.sh`` file found in the same directory as this + document to ``/root/scripts/`` (need to create the dir) and make it + executable. + + - Create the systemd service file for the maintenance service as + ``/etc/systemd/system/testbox-maintenance.service`` with the content:: + + [Unit] + Description=Testbox Maintenance + After=network.target + Before=getty@tty1.service + + [Service] + Type=oneshot + RemainAfterExit=True + ExecStart=/root/scripts/testbox-maintenance.sh + ExecStartPre=/bin/echo -e \033%G + ExecReload=/bin/kill -HUP $MAINPID + WorkingDirectory=/tmp + Environment=TERM=xterm + StandardOutput=journal+console + + [Install] + WantedBy=multi-user.target + + - Enable our service: ``systemctl enable /etc/systemd/system/testbox-maintenance.service`` + + - xxxx ... more ??? + + - Before leaving the chroot, do ``mount /proc /sys /mnt/testbox-*``. + + + - Testing the setup from a VM is kind of useful (if the nfs server can be + convinced to accept root nfs mounts from non-privileged clinet ports): + + - Create a VM using the 64-bit debian profile. Let's call it "pxe-vm". + - Mount the TFTP share somewhere, like M: or /mnt/testbox-tftp. + - Reconfigure the NAT DHCP and TFTP bits:: + + VBoxManage setextradata pxe-vm VBoxInternal/PDM/DriverTransformations/pxe/AboveDriver NAT + VBoxManage setextradata pxe-vm VBoxInternal/PDM/DriverTransformations/pxe/Action mergeconfig + VBoxManage setextradata pxe-vm VBoxInternal/PDM/DriverTransformations/pxe/Config/TFTPPrefix M:/ + VBoxManage setextradata pxe-vm VBoxInternal/PDM/DriverTransformations/pxe/Config/BootFile pxelinux.0 + + - Create the file ``testbox-tftp/pxelinux.cfg/0A00020F`` containing:: + + PATH bios + DEFAULT maintenance + LABEL maintenance + MENU LABEL Maintenance (NFS) + KERNEL maintenance-boot/vmlinuz-3.16.0-4-amd64 + APPEND initrd=maintenance-boot/initrd.img-3.16.0-4-amd64 ro ip=dhcp aufs=tmpfs \ + boot=nfs root=/dev/nfs nfsroot=10.42.1.1:/export/testbox-nfsroot + LABEL local-boot + LOCALBOOT + + +Troubleshooting +=============== + +``PXE-E11`` or something like ``No ARP reply`` + You probably got the TFTP and DHCP on different machines. Try move the TFTP + to the same machine as the DHCP, then the PXE stack won't have to do any + additional ARP resolving. Google results suggest that a congested network + could use the ARP reply to get lost. Our suspicion is that it might also be + related to the PXE stack shipping with the NIC. + + + +----- + +.. [1] See http://www.syslinux.org/wiki/index.php?title=Category:Modules +.. [2] See http://www.syslinux.org/wiki/index.php?title=PXELINUX#Configuration +.. [3] See https://en.wikipedia.org/wiki/Aufs +.. [4] See http://shitwefoundout.com/wiki/Diskless_ubuntu +.. [5] See http://debianaddict.com/2012/06/19/diskless-debian-linux-booting-via-dhcppxenfstftp/ + + +----- + +:Status: $Id: TestBoxImaging.txt $ +:Copyright: Copyright (C) 2010-2023 Oracle Corporation. diff --git a/src/VBox/ValidationKit/docs/VBoxAudioValidationKitReadMe.html b/src/VBox/ValidationKit/docs/VBoxAudioValidationKitReadMe.html new file mode 100644 index 00000000..fe4c2f6d --- /dev/null +++ b/src/VBox/ValidationKit/docs/VBoxAudioValidationKitReadMe.html @@ -0,0 +1,601 @@ + + + + + + +Audio Testing of VirtualBox + + + +
diff --git a/src/VBox/ValidationKit/docs/VBoxAudioValidationKitReadMe.txt b/src/VBox/ValidationKit/docs/VBoxAudioValidationKitReadMe.txt
new file mode 100644
index 00000000..2797f0a9
--- /dev/null
+++ b/src/VBox/ValidationKit/docs/VBoxAudioValidationKitReadMe.txt
@@ -0,0 +1,207 @@
+Audio Testing of VirtualBox
+===========================
+
+
+Overview / Goal
+---------------
+
+The goal is to create a flexible testing framework to test the
+VirtualBox audio stack.
+
+It should be runnable with an easy-to-use setup so that regular users can also
+perform tests on request, without having to install or set up additional
+dependencies.
+
+The framework must be runnable on all host/guest combinations together with all
+audio drivers ("backends") and device emulations offered. This makes it a
+rather big testing matrix which therefore has to be processed in an automated
+fashion.
+
+Additionally it should be flexible enough to add more (custom) tests later on.
+
+
+Operation
+---------
+
+The framework consists of several components which try to make use of as much
+of the existing audio stack code as possible. This allows the following
+operation modes:
+
+Standalone
+  Playing back / recording audio data (test tones / .WAV files) in a
+  standalone scenario (i.e. no VirtualBox / VMs required). This mode uses
+  VirtualBox' audio (mixing) stack and available backend drivers without the
+  need of having VirtualBox installed.
+
+Manual
+  Performing single / multiple tests manually on a local machine.
+  Requires a running and set up test VM.
+
+Automated
+  Performs single / multiple tests via the Validation Kit audio test
+  driver and can be triggered via the Validation Kit Test Manager.
+
+(Re-)validation of previously run tests
+  This takes two test sets and runs the validation / analysis on them.
+
+Self testing mode
+  Performs standalone self tests to verify / debug the involved components.
+
+
+Components and Terminology
+--------------------------
+
+The following components are in charge of performing the audio tests
+(depending on the operation mode, see above):
+
+- VBoxAudioTest (also known as VKAT, "Validation Kit Audio Test"):
+  A binary which can perform the standalone audio tests mentioned above, as well
+  as acting as the guest and host service(s) when performing manual or automated
+  tests. It also includes the analysis / verification of audio test sets.
+  VKAT is also included in host installations and Guest Additions since
+  VirtualBox 7.0 to give customers and end users the opportunity to test and
+  verify the audio stack.
+
+  Additional features include:
+  * Automatic probing of audio backends ("--probe-backends")
+  * Manual playback of test tones ("play -t")
+  * Manual playback of .WAV files ("play <WAV-File>")
+  * Manual recording to .WAV files ("recording <WAV-File>")
+  * Manual device enumeration (sub command "enum")
+  * Manual (re-)verification of test sets (sub command "verify")
+  * Self-contained self tests (sub command "selftest")
+
+  See the syntax help ("--help") for more.
+
+- ATS ("Audio Testing Service"): Component which is being used by VKAT and the
+  Validation Kit audio driver (backend) to communicate across guest and host
+  boundaries. Currently using a TCP/IP transport layer. Also works with VMs
+  which are configured with NAT networking ("reverse connection").
+
+- Validation Kit audio test driver (tdAudioTest.py): Used for integrating and
+  invoking VKAT for manual and automated tests via the Validation Kit framework
+  (Test Manager). Optional.
The test driver can be found at [1]_. + +- Validation Kit audio driver (backend): A dedicated audio backend which + communicates with VKAT running on the same host to perform the actual audio + tests on a VirtualBox installation. This makes it possible to test the full + audio stack on a running VM without any additional / external tools. + + On guest playback, data will be recorded, on guest recording, data will be + injected from the host into the audio stack. + +- Test sets contain + - a test manifest with all information required (vkat_manifest.ini) + - the generated / captured audio data (as raw PCM) + + and are either packed as .tar.gz archives or consist of a dedicated directory + per test set. + + There always must be at least two test sets - one from the host side and one + from the guest side - to perform a verification. + + Each test set contains a test tag so that matching test sets can be + identified. + +The above components are also included in VirtualBox release builds and can be +optionally enabled (disabled by default). + +.. [1] src/VBox/ValidationKit/tests/audio/tdAudioTest.py + + +Setup instructions +------------------ + +- VM needs to be configured to have audio emulation and audio testing enabled + (via extra-data, set "VBoxInternal2/Audio/Debug/Enabled" to "true"). +- Audio input / output for the VM needs to be enabled (depending on the test). +- Start VBoxAudioTest on the guest, for example: + + VBoxAudioTest test --mode guest --tcp-connect-address 10.0.2.2 + + Note: VBoxAudioTest is included with the Guest Additions starting at + VirtualBox 7.0. + Note: Depending on the VM's networking configuration there might be further + steps necessary in order to be able to reach the host from the guest. + See the VirtualBox manual for more information. + + +Performing a manual test +------------------------ + +- Follow "Setup instructions". +- Start VBoxAudioTest on the host with selected test(s), for example: + + VBoxAudioTest test --mode host + + Note: VBoxAudioTest is included with the VirtualBox 7.0 host installers and + will be installed by default. + +- By default the test verification will be done automatically after running the + tests. + + +Advanced: Performing manual verification +---------------------------------------- + +VBoxAudioTest can manually be used with the "verify" sub command in order to +(re-)verify previously generated test sets. It then will return different exit +codes based on the verification result. + + +Advanced: Performing an automated test +-------------------------------------- + +- TxS (Test E[x]ecution Service) has to be up and running (part of the + Validation Kit) on the guest. +- Invoke the tdAudioTest.py test driver, either manually or fully automated + via Test Manager. + + +Internals: Workflow for a single test +------------------------------------- + +When a single test is being executed on a running VM, the following (simplified) +workflow applies: + +- VKAT on the host connects to VKAT running on the guest (via ATS, also can be a + remote machine in theory). +- VKAT on the host connects to Validation Kit audio driver on the host + (via ATS, also can be a remote machine in theory). +- For example, when doing playback tests, VKAT on the host ... + * ... tells the Validation Kit audio driver to start recording + guest playback. + * ... tells the VKAT on the guest to start playing back audio data. + * ... gathers all test data (generated from/by the guest and recorded from + the host) as separate test sets. + * ... 
starts verification / analysis of the test sets.
+
+
+Current status / limitations
+----------------------------
+
+- The following test types are currently implemented:
+  * Test tone (sine wave) playback from the guest
+  * Test tone (sine wave) recording by the guest (injected from the host)
+- Only the HDA device emulation has been verified so far.
+- Only the ALSA audio stack on Debian 10 has been verified so far.
+  Note: This is different from PulseAudio using the ALSA plugin!
+
+
+Troubleshooting
+---------------
+
+- Make sure that audio device emulation is enabled and can be used within the
+  guest. Also, audio input / output has to be enabled, depending on the tests.
+- Make sure that the guest's VBoxAudioTest instance can reach the host via
+  the selected transport layer (TCP/IP by default).
+- Increase the host's audio logging level
+  (via extra-data, set "VBoxInternal2/Audio/Debug/Level" to "5").
+- Increase VBoxAudioTest's verbosity level (add "-v", can be specified
+  multiple times).
+- Check if the VBox release log contains any warnings / errors with the
+  "ValKit:" prefix.
+
+
+:Status: $Id: VBoxAudioValidationKitReadMe.txt $
+:Copyright: Copyright (C) 2021-2023 Oracle Corporation.
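For reference, the extra-data keys mentioned in the setup and troubleshooting
sections above can be set from the command line before starting the VM; a
minimal sketch (the VM name "testvm" is just a placeholder)::

    VBoxManage setextradata "testvm" "VBoxInternal2/Audio/Debug/Enabled" "true"
    VBoxManage setextradata "testvm" "VBoxInternal2/Audio/Debug/Level" "5"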
diff --git a/src/VBox/ValidationKit/docs/VBoxValidationKitReadMe.html b/src/VBox/ValidationKit/docs/VBoxValidationKitReadMe.html
new file mode 100644
index 00000000..45b4acbe
--- /dev/null
+++ b/src/VBox/ValidationKit/docs/VBoxValidationKitReadMe.html
@@ -0,0 +1,467 @@
+The VirtualBox Validation Kit
diff --git a/src/VBox/ValidationKit/docs/VBoxValidationKitReadMe.txt b/src/VBox/ValidationKit/docs/VBoxValidationKitReadMe.txt
new file mode 100644
index 00000000..d1c7de60
--- /dev/null
+++ b/src/VBox/ValidationKit/docs/VBoxValidationKitReadMe.txt
@@ -0,0 +1,113 @@
+
+The VirtualBox Validation Kit
+=============================
+
+
+Introduction
+------------
+
+The VirtualBox Validation Kit is our new public tool for doing automated
+testing of VirtualBox.  We are continually working on adding new features
+and guest operating systems to our battery of tests.
+
+We warmly welcome contributions, new ideas for good tests and fixes.
+
+
+Directory Layout
+----------------
+
+./docs/
+    The documentation for the test suite mostly lives here, the exception being
+    readme.txt files that are better off living near what they concern.
+
+    For a definition of terms used here, see the Definitions / Glossary section
+    of ./docs/AutomaticTestingRevamp.txt / ./docs/AutomaticTestingRevamp.html.
+
+./testdriver/
+    Python module implementing the base test drivers and supporting stuff.
+    The base test driver implementation is found in ./testdriver/base.py while
+    the VBox centric specialization is in ./testdriver/vbox.py.  Various VBox
+    API wrappers that make things easier to use and gloss over a lot of API
+    version differences live in ./testdriver/vboxwrappers.py.
+
+    Test VM collections are often managed thru ./testdriver/vboxtestvms.py, but
+    don't necessarily have to be; it's up to the individual test driver.
+
+    For logging, reporting results, uploading useful files and such we have a
+    reporter singleton sub-package, ./testdriver/reporter.py.  It implements
+    both local (for local testing) and remote (for testboxes + test manager)
+    reporting.
+
+    There is also a VBoxTXS client implementation in txsclient.py and a stacked
+    test driver for installing VBox (vboxinstaller.py).  Most test drivers will
+    use the TXS client indirectly thru vbox.py methods.  The installer driver
+    is a special trick for the testbox+testmanager setup.
+
+./tests/
+    The python scripts driving the tests.  These are organized by what they
+    test and are all derived from the base classes in ./testdriver (mostly from
+    vbox.py of course).  Most tests use one or more VMs from a standard set of
+    preconfigured VMs defined by ./testdriver/vboxtestvms.py (mentioned above),
+    though the installation tests use prepared ISOs and floppy images.
+
+./vms/
+    Text documents describing the preconfigured test VMs defined by
+    ./testdriver/vboxtestvms.py.  This will also contain descriptions of how to
+    prepare installation ISOs when we get around to it (soon).
+
+./utils/
+    Test utilities and lower level test programs, compiled from C, C++ and
+    Assembly mostly.  Generally available for both host and guest, i.e. in the
+    zip and on the VBoxValidationKit.iso respectively.
+
+    The Test eXecution Service (VBoxTXS) found in ./utils/TestExecServ is one
+    of the more important utilities.  It implements a remote execution service
+    for running programs/tests inside VMs and on other test boxes.  See
+    ./utils/TestExecServ/vboxtxs-readme.txt for more details.
+
+    A simple network bandwidth and latency test program can be found in
+    ./utils/network/NetPerf.cpp.
+
+./bootsectors/
+    Boot sector test environment.  This allows creating floppy images in
+    assembly that test specific CPU or device behavior.  Most tests can be
+    put on a USB stick, floppy or similar and booted up on real hardware for
+    comparison.
+    All floppy images can be used for manual testing by developers
+    and most will be used by test drivers (./tests/*/td*.py) sooner or later.
+
+    The boot sector environment is heavily bound to yasm and its ability to
+    link binary images for single assembly input units.  There is a "library"
+    of standard initialization code and runtime code, which includes switching
+    to all (well V8086 mode is still missing, but we'll get that done
+    eventually) processor modes and paging modes.  The image specific code is
+    split into init/driver code and test template, the latter can be
+    instantiated for each process execution+paging mode.
+
+./common/
+    Python package containing common python code.
+
+./testboxscript/
+    The testbox script.  This is installed on testboxes used for automatic
+    testing with the testmanager.
+
+./testmanager/
+    The VirtualBox Test Manager (server side code).  This is written in Python
+    and currently uses postgresql as database backend for no particular reason
+    other than that it was already installed on the server the test manager was
+    going to run on.  It's relatively generic, though there are of course
+    things in there that are of more use when testing VirtualBox than other
+    things.  A more detailed account (though perhaps a little dated) of the
+    test manager can be found in ./docs/AutomaticTestingRevamp.txt and
+    ./docs/AutomaticTestingRevamp.html.
+
+./testanalysis/
+    A start at local test result analysis, comparing network test output.  We'll
+    probably be picking this up again later.
+
+./snippets/
+    Various code snippets that may be turned into real tests at some point.
+
+
+
+:Status: $Id: VBoxValidationKitReadMe.txt $
+:Copyright: Copyright (C) 2010-2023 Oracle Corporation.
diff --git a/src/VBox/ValidationKit/docs/WindbgPython.txt b/src/VBox/ValidationKit/docs/WindbgPython.txt
new file mode 100644
index 00000000..198ec917
--- /dev/null
+++ b/src/VBox/ValidationKit/docs/WindbgPython.txt
@@ -0,0 +1,10 @@
+$Id: WindbgPython.txt $
+
+Just a couple of useful windbg commands:
+
+Show python filenames + frame line number (not statement) up the call stack:
+!for_each_frame ".block { dt python27!_frame qwo(!f) f_lineno; da qwo(qwo(qwo(!f)+0x20) + 50) + 20 } "
+
+Same, alternative version:
+!for_each_frame .if ( $spat("${@#FunctionName}","*PyEval_EvalFrameEx*") ) { .printf "python frame: line %d\npython frame: filename %ma\n", @@c++(f->f_lineno), qwo(qwo(qwo(!f)+0x20) + 50) + 20 }
+
diff --git a/src/VBox/ValidationKit/docs/testbox-maintenance.sh b/src/VBox/ValidationKit/docs/testbox-maintenance.sh
new file mode 100755
index 00000000..cf2d329d
--- /dev/null
+++ b/src/VBox/ValidationKit/docs/testbox-maintenance.sh
@@ -0,0 +1,409 @@
+#!/bin/bash
+# $Id: testbox-maintenance.sh $
+## @file
+# VirtualBox Validation Kit - testbox maintenance service
+#
+
+#
+# Copyright (C) 2006-2023 Oracle and/or its affiliates.
+#
+# This file is part of VirtualBox base platform packages, as
+# available from https://www.virtualbox.org.
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation, in version 3 of the
+# License.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <https://www.gnu.org/licenses>.
+# +# The contents of this file may alternatively be used under the terms +# of the Common Development and Distribution License Version 1.0 +# (CDDL), a copy of it is provided in the "COPYING.CDDL" file included +# in the VirtualBox distribution, in which case the provisions of the +# CDDL are applicable instead of those of the GPL. +# +# You may elect to license modified versions of this file under the +# terms and conditions of either the GPL or the CDDL or both. +# +# SPDX-License-Identifier: GPL-3.0-only OR CDDL-1.0 +# + + +# +# Global Variables (config first). +# +MY_REBOOT_WHEN_DONE="yes" +#MY_REBOOT_WHEN_DONE="" # enable this for debugging the script + +MY_TFTP_ROOT="/mnt/testbox-tftp" +MY_BACKUP_ROOT="/mnt/testbox-backup" +MY_BACKUP_MNT_TEST_FILE="/mnt/testbox-backup/testbox-backup" +MY_GLOBAL_LOG_FILE="${MY_BACKUP_ROOT}/maintenance.log" +MY_DD_BLOCK_SIZE=256K + +MY_IP="" +MY_BACKUP_DIR="" +MY_LOG_FILE="" +MY_PXELINUX_CFG_FILE="" + + +## +# Info message. +# +InfoMsg() +{ + echo $*; + if test -n "${MY_LOG_FILE}"; then + echo "`date -uIsec`: ${MY_IP}: info:" $* >> ${MY_LOG_FILE}; + fi +} + + +## +# Error message and reboot+exit. First argument is exit code. +# +ErrorMsgExit() +{ + MY_RET=$1 + shift + echo "testbox-maintenance.sh: error:" $* >&2; + # Append to the testbox log. + if test -n "${MY_LOG_FILE}"; then + echo "`date -uIsec`: ${MY_IP}: error:" $* >> "${MY_LOG_FILE}"; + fi + # Append to the global log. + if test -f "${MY_BACKUP_MNT_TEST_FILE}"; then + echo "`date -uIsec`: ${MY_IP}: error:" $* >> "${MY_GLOBAL_LOG_FILE}"; + fi + + # + # On error we normally wait 5min before rebooting to avoid repeating the + # same error too many time before the admin finds out. We choose NOT to + # remove the PXE config file here because (a) the admin might otherwise + # not notice something went wrong, (b) the system could easily be in a + # weird unbootable state, (c) the problem might be temporary. + # + # While debugging, we just exit here. + # + if test -n "${MY_REBOOT_WHEN_DONE}"; then + sleep 5m + echo "testbox-maintenance.sh: rebooting (after error)" >&2; + reboot + fi + exit ${MY_RET} +} + +# +# Try figure out the IP address of the box and the hostname from it again. +# +MY_IP=` hostname -I | cut -f1 -d' ' | head -1 ` +if test -z "${MY_IP}" -o `echo "${MY_IP}" | wc -w` -ne "1" -o "${MY_IP}" = "127.0.0.1"; then + ErrorMsgExit 10 "Failed to get a good IP! (MY_IP=${MY_IP})" +fi +MY_HOSTNAME=`getent hosts "${MY_IP}" | sed -s 's/[[:space:]][[:space:]]*/ /g' | cut -d' ' -f2 ` +if test -z "${MY_HOSTNAME}"; then + MY_HOSTNAME="unknown"; +fi + +# Derive the backup dir and log file name from it. +if test ! -f "${MY_BACKUP_MNT_TEST_FILE}"; then + mount "${MY_BACKUP_ROOT}" + if test ! -f "${MY_BACKUP_MNT_TEST_FILE}"; then + echo "Retrying mounting '${MY_BACKUP_ROOT}' in 15 seconds..." >&2 + sleep 15 + mount "${MY_BACKUP_ROOT}" + fi + if test ! -f "${MY_BACKUP_MNT_TEST_FILE}"; then + ErrorMsgExit 11 "Backup directory is not mounted." + fi +fi +MY_BACKUP_DIR="${MY_BACKUP_ROOT}/${MY_IP}" +MY_LOG_FILE="${MY_BACKUP_DIR}/maintenance.log" +mkdir -p "${MY_BACKUP_DIR}" +echo "================ `date -uIsec`: ${MY_IP}: ${MY_HOSTNAME} starts a new session ================" >> "${MY_LOG_FILE}" +echo "`date -uIsec`: ${MY_IP}: ${MY_HOSTNAME} says hi." >> "${MY_GLOBAL_LOG_FILE}" +InfoMsg "MY_IP=${MY_IP}" + +# +# Redirect stderr+stdout thru tee and to a log file on the server. 
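# Note: the "exec &> >(tee -a ...)" line below relies on bash process
# substitution, so this script has to run under bash (as the shebang
# requests), not under a plain POSIX sh.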
+#
+MY_OUTPUT_LOG_FILE="${MY_BACKUP_DIR}/maintenance-output.log"
+echo "" >> "${MY_OUTPUT_LOG_FILE}"
+echo "================ `date -uIsec`: ${MY_IP}: ${MY_HOSTNAME} starts a new session ================" >> "${MY_OUTPUT_LOG_FILE}"
+exec &> >(tee -a "${MY_OUTPUT_LOG_FILE}")
+
+#
+# Convert the IP address to PXELINUX hex format, then check that we've got
+# a config file on the TFTP share that we later can remove.  We consider it a
+# fatal failure if we don't because we've probably got the wrong IP and we'll
+# be stuck doing the same stuff over and over again.
+#
+MY_TMP=`echo "${MY_IP}" | sed -e 's/\./ /g' `
+MY_IP_HEX=`printf "%02X%02X%02X%02X" ${MY_TMP}`
+InfoMsg "MY_IP_HEX=${MY_IP_HEX}"
+
+if test ! -f "${MY_TFTP_ROOT}/pxelinux.0"; then
+    mount "${MY_TFTP_ROOT}"
+    if test ! -f "${MY_TFTP_ROOT}/pxelinux.0"; then
+        echo "Retrying mounting '${MY_TFTP_ROOT}' in 15 seconds..." >&2
+        sleep 15
+        mount "${MY_TFTP_ROOT}"
+    fi
+    if test ! -f "${MY_TFTP_ROOT}/pxelinux.0"; then
+        ErrorMsgExit 12 "TFTP share not mounted or missing pxelinux.0 in the root."
+    fi
+fi
+
+MY_PXELINUX_CFG_FILE="${MY_TFTP_ROOT}/pxelinux.cfg/${MY_IP_HEX}"
+if test ! -f "${MY_PXELINUX_CFG_FILE}"; then
+    ErrorMsgExit 13 "No pxelinux.cfg file found (${MY_PXELINUX_CFG_FILE}) - wrong IP?"
+fi
+
+#
+# Dig the action out of the kernel command line.
+#
+if test -n "${MY_REBOOT_WHEN_DONE}"; then
+    InfoMsg "/proc/cmdline: `cat /proc/cmdline`"
+    set `cat /proc/cmdline`
+else
+    InfoMsg "Using script command line: $*"
+fi
+MY_ACTION=not-found
+while test $# -ge 1; do
+    case "$1" in
+        testbox-action-*)
+            MY_ACTION="$1"
+            ;;
+    esac
+    shift
+done
+if test "${MY_ACTION}" = "not-found"; then
+    ErrorMsgExit 14 "No action given. Expected testbox-action-backup, testbox-action-backup-again, testbox-action-restore," \
+                    "testbox-action-refresh-info, or testbox-action-rescue on the kernel command line.";
+fi
+
+# Validate and shorten the action.
+case "${MY_ACTION}" in
+    testbox-action-backup)
+        MY_ACTION="backup";
+        ;;
+    testbox-action-backup-again)
+        MY_ACTION="backup-again";
+        ;;
+    testbox-action-restore)
+        MY_ACTION="restore";
+        ;;
+    testbox-action-refresh-info)
+        MY_ACTION="refresh-info";
+        ;;
+    testbox-action-rescue)
+        MY_ACTION="rescue";
+        ;;
+    *)  ErrorMsgExit 15 "Invalid action '${MY_ACTION}'";
+        ;;
+esac
+
+# Log the action in both logs.
+echo "`date -uIsec`: ${MY_IP}: info: Executing '${MY_ACTION}'." >> "${MY_GLOBAL_LOG_FILE}";
+
+#
+# Generate missing info for this testbox if backing up.
+#
+MY_INFO_FILE="${MY_BACKUP_DIR}/testbox-info.txt"
+if test '!'
-f "${MY_INFO_FILE}" \ + -o "${MY_ACTION}" = "backup" \ + -o "${MY_ACTION}" = "backup-again" \ + -o "${MY_ACTION}" = "refresh-info" ; +then + echo "IP: ${MY_IP}" > ${MY_INFO_FILE}; + echo "HEX-IP: ${MY_IP_HEX}" >> ${MY_INFO_FILE}; + echo "Hostname: ${MY_HOSTNAME}" >> ${MY_INFO_FILE}; + echo "" >> ${MY_INFO_FILE}; + echo "**** cat /proc/cpuinfo ****" >> ${MY_INFO_FILE}; + echo "**** cat /proc/cpuinfo ****" >> ${MY_INFO_FILE}; + echo "**** cat /proc/cpuinfo ****" >> ${MY_INFO_FILE}; + cat /proc/cpuinfo >> ${MY_INFO_FILE}; + echo "" >> ${MY_INFO_FILE}; + echo "**** lspci -vvv ****" >> ${MY_INFO_FILE}; + echo "**** lspci -vvv ****" >> ${MY_INFO_FILE}; + echo "**** lspci -vvv ****" >> ${MY_INFO_FILE}; + lspci -vvv >> ${MY_INFO_FILE} 2>&1; + echo "" >> ${MY_INFO_FILE}; + echo "**** biosdecode ****" >> ${MY_INFO_FILE}; + echo "**** biosdecode ****" >> ${MY_INFO_FILE}; + echo "**** biosdecode ****" >> ${MY_INFO_FILE}; + biosdecode >> ${MY_INFO_FILE} 2>&1; + echo "" >> ${MY_INFO_FILE}; + echo "**** dmidecode ****" >> ${MY_INFO_FILE}; + echo "**** dmidecode ****" >> ${MY_INFO_FILE}; + echo "**** dmidecode ****" >> ${MY_INFO_FILE}; + dmidecode >> ${MY_INFO_FILE} 2>&1; + echo "" >> ${MY_INFO_FILE}; + echo "**** fdisk -l ****" >> ${MY_INFO_FILE}; + echo "**** fdisk -l ****" >> ${MY_INFO_FILE}; + echo "**** fdisk -l ****" >> ${MY_INFO_FILE}; + fdisk -l >> ${MY_INFO_FILE} 2>&1; + echo "" >> ${MY_INFO_FILE}; + echo "**** dmesg ****" >> ${MY_INFO_FILE}; + echo "**** dmesg ****" >> ${MY_INFO_FILE}; + echo "**** dmesg ****" >> ${MY_INFO_FILE}; + dmesg >> ${MY_INFO_FILE} 2>&1; + + # + # Get the raw ACPI tables and whatnot since we can. Use zip as tar will + # zero pad virtual files due to wrong misleading size returned by stat (4K). + # + # Note! /sys/firmware/dmi/entries/15-0/system_event_log/raw_event_log has been + # see causing fatal I/O errors, so skip all raw_event_log files. + # + zip -qr9 "${MY_BACKUP_DIR}/testbox-info.zip" \ + /proc/cpuinfo \ + /sys/firmware/ \ + -x "*/raw_event_log" +fi + +if test '!' -f "${MY_BACKUP_DIR}/${MY_HOSTNAME}" -a "${MY_HOSTNAME}" != "unknown"; then + echo "${MY_HOSTNAME}" > "${MY_BACKUP_DIR}/${MY_HOSTNAME}" +fi + +if test '!' -f "${MY_BACKUP_DIR}/${MY_IP_HEX}"; then + echo "${MY_IP}" > "${MY_BACKUP_DIR}/${MY_IP_HEX}" +fi + +# +# Assemble a list of block devices using /sys/block/* and some filtering. +# +if test -f "${MY_BACKUP_DIR}/disk-devices.lst"; then + MY_BLOCK_DEVS=`cat ${MY_BACKUP_DIR}/disk-devices.lst \ + | sed -e 's/[[:space:]][::space::]]*/ /g' -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//' `; + if test -z "${MY_BLOCK_DEVS}"; then + ErrorMsgExit 17 "No block devices found via sys/block." + fi + InfoMsg "disk-device.lst: MY_BLOCK_DEVS=${MY_BLOCK_DEVS}"; +else + MY_BLOCK_DEVS=""; + for MY_DEV in `ls /sys/block`; do + case "${MY_DEV}" in + [sh]d*) + MY_BLOCK_DEVS="${MY_BLOCK_DEVS} ${MY_DEV}" + ;; + *) InfoMsg "Ignoring /sys/block/${MY_DEV}"; + ;; + esac + done + if test -z "${MY_BLOCK_DEVS}"; then + ErrorMsgExit 17 "No block devices found via /sys/block." + fi + InfoMsg "/sys/block: MY_BLOCK_DEVS=${MY_BLOCK_DEVS}"; +fi + +# +# Take action +# +case "${MY_ACTION}" in + # + # Create a backup. The 'backup' action refuses to overwrite an + # existing backup, but is otherwise identical to 'backup-again'. 
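    # (Both the backup and restore branches below pipe dd through gzip/gunzip
    # and check bash's PIPESTATUS array so that a failure in either half of
    # the pipeline is detected.)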
+ # + backup|backup-again) + for MY_DEV in ${MY_BLOCK_DEVS}; do + MY_DST="${MY_BACKUP_DIR}/${MY_DEV}.gz" + if test -f "${MY_DST}"; then + if test "${MY_ACTION}" != 'backup-again'; then + ErrorMsgExit 18 "${MY_DST} already exists" + fi + InfoMsg "${MY_DST} already exists" + fi + done + + # Do the backing up. + for MY_DEV in ${MY_BLOCK_DEVS}; do + MY_SRC="/dev/${MY_DEV}" + MY_DST="${MY_BACKUP_DIR}/${MY_DEV}.gz" + if test -f "${MY_DST}"; then + mv -f "${MY_DST}" "${MY_DST}.old"; + fi + if test -b "${MY_SRC}"; then + InfoMsg "Backing up ${MY_SRC} to ${MY_DST}..."; + dd if="${MY_SRC}" bs=${MY_DD_BLOCK_SIZE} | gzip -c > "${MY_DST}"; + MY_RCS=("${PIPESTATUS[@]}"); + if test "${MY_RCS[0]}" -eq 0 -a "${MY_RCS[1]}" -eq 0; then + InfoMsg "Successfully backed up ${MY_SRC} to ${MY_DST}"; + else + rm -f "${MY_DST}"; + ErrorMsgExit 19 "There was a problem backing up ${MY_SRC} to ${MY_DST}: dd => ${MY_RCS[0]}; gzip => ${MY_RCS[1]}"; + fi + else + InfoMsg "Skipping ${MY_SRC} as it either doesn't exist or isn't a block device"; + fi + done + ;; + + # + # Restore existing. + # + restore) + for MY_DEV in ${MY_BLOCK_DEVS}; do + MY_SRC="${MY_BACKUP_DIR}/${MY_DEV}.gz" + MY_DST="/dev/${MY_DEV}" + if test -b "${MY_DST}"; then + if test -f "${MY_SRC}"; then + InfoMsg "Restoring ${MY_SRC} onto ${MY_DST}..."; + gunzip -c "${MY_SRC}" | dd of="${MY_DST}" bs=${MY_DD_BLOCK_SIZE} iflag=fullblock; + MY_RCS=("${PIPESTATUS[@]}"); + if test ${MY_RCS[0]} -eq 0 -a ${MY_RCS[1]} -eq 0; then + InfoMsg "Successfully restored ${MY_SRC} onto ${MY_DST}"; + else + ErrorMsgExit 20 "There was a problem restoring ${MY_SRC} onto ${MY_DST}: dd => ${MY_RCS[1]}; gunzip => ${MY_RCS[0]}"; + fi + else + InfoMsg "Skipping ${MY_DST} because ${MY_SRC} does not exist."; + fi + else + InfoMsg "Skipping ${MY_DST} as it either doesn't exist or isn't a block device."; + fi + done + ;; + + # + # Nothing else to do for refresh-info. + # + refresh-info) + ;; + + # + # For the rescue action, we just quit without removing the PXE config or + # rebooting the box. The admin will do that once the system has been rescued. + # + rescue) + InfoMsg "rescue: exiting. Admin must remove PXE config and reboot manually when done." + exit 0; + ;; + + *) ErrorMsgExit 98 "Huh? MY_ACTION='${MY_ACTION}'" + ;; +esac + +# +# If we get here, remove the PXE config and reboot immediately. +# +InfoMsg "'${MY_ACTION}' - done"; +if test -n "${MY_REBOOT_WHEN_DONE}"; then + sync + if rm -f "${MY_PXELINUX_CFG_FILE}"; then + InfoMsg "removed ${MY_PXELINUX_CFG_FILE}"; + else + ErrorMsgExit 99 "failed to remove ${MY_PXELINUX_CFG_FILE}"; + fi + sync + InfoMsg "rebooting"; + reboot +fi +exit 0 diff --git a/src/VBox/ValidationKit/docs/testbox-pxe-conf.sh b/src/VBox/ValidationKit/docs/testbox-pxe-conf.sh new file mode 100755 index 00000000..d1f28b05 --- /dev/null +++ b/src/VBox/ValidationKit/docs/testbox-pxe-conf.sh @@ -0,0 +1,162 @@ +#!/bin/bash +# $Id: testbox-pxe-conf.sh $ +## @file +# VirtualBox Validation Kit - testbox pxe config emitter. +# + +# +# Copyright (C) 2006-2023 Oracle and/or its affiliates. +# +# This file is part of VirtualBox base platform packages, as +# available from https://www.virtualbox.org. +# +# This program is free software; you can redistribute it and/or +# modify it under the terms of the GNU General Public License +# as published by the Free Software Foundation, in version 3 of the +# License. 
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, see <https://www.gnu.org/licenses>.
+#
+# The contents of this file may alternatively be used under the terms
+# of the Common Development and Distribution License Version 1.0
+# (CDDL), a copy of it is provided in the "COPYING.CDDL" file included
+# in the VirtualBox distribution, in which case the provisions of the
+# CDDL are applicable instead of those of the GPL.
+#
+# You may elect to license modified versions of this file under the
+# terms and conditions of either the GPL or the CDDL or both.
+#
+# SPDX-License-Identifier: GPL-3.0-only OR CDDL-1.0
+#
+
+
+#
+# Global Variables (config first).
+#
+MY_NFS_SERVER_IP="10.165.98.101"
+MY_GATEWAY_IP="10.165.98.1"
+MY_NETMASK="255.255.254.0"
+MY_ETH_DEV="eth0"
+MY_AUTO_CFG="none"
+
+# options
+MY_PXELINUX_CFG_DIR="/mnt/testbox-tftp/pxelinux.cfg"
+MY_ACTION=""
+MY_IP=""
+MY_IP_HEX=""
+
+#
+# Parse arguments.
+#
+while test "$#" -ge 1; do
+    MY_ARG=$1
+    shift
+    case "${MY_ARG}" in
+        -c|--cfg-dir)
+            MY_PXELINUX_CFG_DIR="$1";
+            shift;
+            if test -z "${MY_PXELINUX_CFG_DIR}"; then
+                echo "syntax error: Empty pxeclient.cfg path." >&2;
+                exit 2;
+            fi
+            ;;
+
+        -h|--help)
+            echo "usage: testbox-pxe-conf.sh: [-c /mnt/testbox-tftp/pxelinux.cfg] <ip|hostname> <action>";
+            echo "Actions: backup, backup-again, restore, refresh-info, rescue";
+            exit 0;
+            ;;
+        -*)
+            echo "syntax error: Invalid option: ${MY_ARG}" >&2;
+            exit 2;
+            ;;
+
+        *)  if test -z "$MY_ARG"; then
+                echo "syntax error: Empty argument" >&2;
+                exit 2;
+            fi
+            if test -z "${MY_IP}"; then
+                # Split up the IP if possible, if not do gethostbyname on the argument.
+                MY_TMP=`echo "${MY_ARG}" | sed -e 's/\./ /g'`
+                if test `echo "${MY_TMP}" | wc -w` -ne 4 \
+                || ! printf "%02X%02X%02X%02X" ${MY_TMP} > /dev/null 2>&1; then
+                    MY_TMP2=`getent hosts "${MY_ARG}" | head -1 | cut -d' ' -f1`;
+                    MY_TMP=`echo "${MY_TMP2}" | sed -e 's/\./ /g'`
+                    if test `echo "${MY_TMP}" | wc -w` -eq 4 \
+                    && printf "%02X%02X%02X%02X" ${MY_TMP} > /dev/null 2>&1; then
+                        echo "info: resolved '${MY_ARG}' as '${MY_TMP2}'";
+                        MY_ARG="${MY_TMP2}";
+                    else
+                        echo "syntax error: Invalid IP: ${MY_ARG}" >&2;
+                        exit 2;
+                    fi
+                fi
+                MY_IP_HEX=`printf "%02X%02X%02X%02X" ${MY_TMP}`;
+                MY_IP="${MY_ARG}";
+            else
+                if test -z "${MY_ACTION}"; then
+                    case "${MY_ARG}" in
+                        backup|backup-again|restore|refresh-info|rescue)
+                            MY_ACTION="${MY_ARG}";
+                            ;;
+                        *)
+                            echo "syntax error: Invalid action: ${MY_ARG}" >&2;
+                            exit 2;
+                            ;;
+                    esac
+                else
+                    echo "syntax error: Too many arguments" >&2;
+                    exit 2;
+                fi
+            fi
+            ;;
+    esac
+done
+
+if test -z "${MY_ACTION}"; then
+    echo "syntax error: Insufficient arguments" >&2;
+    exit 2;
+fi
+if test ! -d "${MY_PXELINUX_CFG_DIR}"; then
+    echo "error: pxeclient.cfg path does not point to a directory: ${MY_PXELINUX_CFG_DIR}" >&2;
+    exit 1;
+fi
+if test ! -f "${MY_PXELINUX_CFG_DIR}/default"; then
+    echo "error: pxeclient.cfg path does not contain a 'default' file: ${MY_PXELINUX_CFG_DIR}" >&2;
+    exit 1;
+fi
+
+
+#
+# Produce the file.
+# Using echo here so we can split up the APPEND line more easily.
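# Example (hypothetical testbox at 10.40.41.42):
#   ./testbox-pxe-conf.sh 10.40.41.42 restore
# would write /mnt/testbox-tftp/pxelinux.cfg/0A28292A with a
# "testbox-action-restore" APPEND line like the one generated below.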
+# +MY_CFG_FILE="${MY_PXELINUX_CFG_DIR}/${MY_IP_HEX}" +set +e +echo "PATH bios" > "${MY_CFG_FILE}"; +echo "DEFAULT maintenance" >> "${MY_CFG_FILE}"; +echo "LABEL maintenance" >> "${MY_CFG_FILE}"; +echo " MENU LABEL Maintenance (NFS)" >> "${MY_CFG_FILE}"; +echo " KERNEL maintenance-boot/vmlinuz-3.16.0-4-amd64" >> "${MY_CFG_FILE}"; +echo -n " APPEND initrd=maintenance-boot/initrd.img-3.16.0-4-amd64 testbox-action-${MY_ACTION}" >> "${MY_CFG_FILE}"; +echo -n " ro aufs=tmpfs boot=nfs root=/dev/nfs" >> "${MY_CFG_FILE}"; +echo -n " nfsroot=${MY_NFS_SERVER_IP}:/export/testbox-nfsroot,ro,tcp" >> "${MY_CFG_FILE}"; +echo -n " nfsvers=3 nfsrootdebug" >> "${MY_CFG_FILE}"; +if test "${MY_AUTO_CFG}" = "none"; then + # Note! Only 6 arguments to ip! Userland ipconfig utility barfs if autoconf and dns options are given. + echo -n " ip=${MY_IP}:${MY_NFS_SERVER_IP}:${MY_GATEWAY_IP}:${MY_NETMASK}:maintenance:${MY_ETH_DEV}" >> "${MY_CFG_FILE}"; +else + echo -n " ip=${MY_AUTO_CFG}" >> "${MY_CFG_FILE}"; +fi +echo "" >> "${MY_CFG_FILE}"; +echo "LABEL local-boot" >> "${MY_CFG_FILE}"; +echo "LOCALBOOT" >> "${MY_CFG_FILE}"; +echo "Successfully generated '${MY_CFG_FILE}'." +exit 0; + diff --git a/src/VBox/ValidationKit/docs/valkit.txt b/src/VBox/ValidationKit/docs/valkit.txt new file mode 100644 index 00000000..9e94eff5 --- /dev/null +++ b/src/VBox/ValidationKit/docs/valkit.txt @@ -0,0 +1 @@ +The VirtualBox ValidationKit ISO. -- cgit v1.2.3