TODO NOW:
-- Make it faster
- - Certain classes of error will continually fail, so they should
- be put in a different "seen" file which also skips them, unless
- we have some sort of gentle force
-
- Keep my sanity when upgrading 1000 installs
- - Distinguish between errors(?)
- - Custom merge algo: absolute php.ini symlinks to relative symlinks (this
- does not seem to have been a problem in practice)
- - Custom merge algo: check if it's got extra \r's in the file,
- and dos2unix it if it does, before performing the merge
- - Use `vos exa` to check what a person's quota is. We can
- figure out roughly how big the upgrade is going to be by
- doing a size comparison of the tars.
- - `git pull` MUST NOT fail, otherwise things are left conflicted,
- and not easy to fix.
- - Prune -7 call errors and automatically reprocess them (with a
- strike out counter of 3)--this requires better error parsing
- - Snap-in conflict resolution teaching:
- 1. View the merge conflicts after doing a short run
- 2. Identify common merge conflicts
- 3. Copypaste the conflict markers to the application. Scrub
- user-specific data; this may mean removing the entire
- upper bit which is the user-version.
- 4. Specify which section to keep. /Usually/ this means
- punting the new change, but if the top was specified
- it means we get a little more flexibility. Try to
- minimize wildcarding: those things need to be put into
- subpatterns and then reconstituted into the output.
-
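The dos2unix pre-merge item above could be sketched roughly as follows; the function names and the rewrite-in-place approach are assumptions for illustration, not Wizard's actual code:

```python
# Illustrative sketch of the pre-merge CRLF check described above.
# Function names and the in-place rewrite are assumptions, not
# Wizard's actual implementation.

def has_crlf(data):
    """Check whether a file's contents contain DOS line endings."""
    return b"\r\n" in data

def dos2unix(data):
    """Convert CRLF line endings to LF, like the dos2unix tool."""
    return data.replace(b"\r\n", b"\n")

def normalize_before_merge(path):
    """Rewrite `path` with Unix line endings if it has extra \r's.

    Returns True if the file was modified, so the caller knows to
    record the normalization before attempting the merge.
    """
    with open(path, "rb") as f:
        data = f.read()
    if not has_crlf(data):
        return False
    with open(path, "wb") as f:
        f.write(dos2unix(data))
    return True
```

Running the normalization as its own step keeps the subsequent merge diff free of whole-file line-ending noise.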
-- Distinguish between logging and reporting (so we can easily send mail
- to users)
- - Logs aren't actually useful, /because/ most operations are idempotent.
- Thus, scratch logfile and make our report files more useful: error.log
- needs error information; we don't care too much about machine-readability.
- All report files should be overwritten on the next run, since we like
- using --limit to incrementally increase the number of things we run. Note
- that if we add soft ignores, you /do/ lose information, so there needs
- to be some way to also have the soft ignore report a "cached error"
- - Report the identifier number at the beginning of all of the stdout logs
- - Don't really care about having the name in the logfile name, but
- have a lookup txt file
- - Figure out a way of collecting blacklist data from .scripts/blacklisted
- and aggregate it together
- - Failed migrations should be wired to have wizard commands in them
- automatically log to the relevant file. In addition, the seen file
- should get updated when one of them gets fixed.
- - Failed migrations should report how many unmerged files there are
- (so we can auto-punt if it's over a threshold)
-
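A hypothetical sketch of the report behavior described above: report files are truncated on each run (so incremental --limit runs overwrite stale results), the failure counter increments automatically rather than manually, and soft-ignored installs are still surfaced as "cached" errors so no information is lost. The class name and line format are illustrative only.

```python
class Report:
    """Hypothetical per-run report writer (names/format are assumptions).

    Opening with "w" truncates, so each run overwrites the previous
    report. Installs skipped by a soft ignore are recorded as CACHED
    errors instead of disappearing silently.
    """
    def __init__(self, path):
        self.file = open(path, "w")  # "w" truncates: reports never accumulate
        self.fails = 0               # incremented by error(), not by callers

    def error(self, ident, message, cached=False):
        self.fails += 1
        tag = "CACHED " if cached else ""
        # Lead with the identifier number so the log is easy to grep
        self.file.write("%s %s%s\n" % (ident, tag, message))

    def close(self):
        self.file.close()
```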
-- Let users use Wizard when ssh'ed into Scripts
- - Make single user mass-migrate work when not logged in as root
+ - Replace gaierror with a more descriptive name (this is a DNS error)
- Make the rest of the world use Wizard
- Make parallel-find.pl use `sudo -u username git describe --tags`
- Output summary charts when I increase specificity
- Summary script should do something intelligent when distinguishing
between old-style and new-style installs
+ - Report code in wizard/command/__init__.py is ugly as sin. Also,
+ the Report object should operate at a higher level of abstraction
+ so we don't have to manually increment fails. (in fact, that should
+ probably be called something different). The by-percent errors should
+ also be automated.
+ - Indents in upgrade.py are getting pretty ridiculous; more breaking
+ into functions is probably a good idea
+ - Move resolutions in mediawiki.py to a text file? (the parsing overhead
+ may not be worth it)
+ - Investigate QuotaParseErrors
+ - If a process is Ctrl-C'ed, it can result in an upgrade that has
+ an updated filesystem but not an updated database. Make this more
+ resilient
+ - PHP allows the semicolon to be omitted at end of file, which can
+ result in a parse error if merge resolutions aren't careful.
- Other stuff
+ - Make single user mass-migrate work when not logged in as root
- Don't use the scripts heuristics unless we're on scripts with the
AFS patch. Check with `fs sysname`
- Make 'wizard summary' generate nice pretty graphs of installs by date
turbogears: NFC
wordpress: Multistage install process
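The `fs sysname` check mentioned above could look something like this sketch; the exact output format of `fs sysname` and the substring test for the AFS-patched Scripts sysname are assumptions:

```python
import subprocess

def afs_sysname():
    """Return the AFS sysname reported by `fs sysname`.

    Output is assumed to look like:
        Current sysname is 'amd64_deb50scripts'
    """
    out = subprocess.check_output(["fs", "sysname"]).decode()
    return parse_sysname(out)

def parse_sysname(out):
    # Pull the quoted sysname out of the human-readable message
    start = out.index("'") + 1
    return out[start:out.index("'", start)]

def on_scripts(sysname):
    """Only trust the scripts heuristics on a sysname with the AFS patch
    (assumed here to be identifiable by a 'scripts' suffix)."""
    return "scripts" in sysname
```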
-PHILOSOPHY ABOUT LOGGING
-
-Logging is most useful when performing a mass run. This
-includes things such as mass-migration as well as when running
-summary reports. An interesting property about mass-migration
-or mass-upgrade, however, is that if they fail, they are
-idempotent, so an individual case can be debugged simply by running
-the single-install equivalent with --debug on. (This, indeed,
-may be easier to do than sifting through a logfile).
-
-It is a different story when you are running a summary report:
-you are primarily bound by your AFS cache and how quickly you can
-iterate through all of the autoinstalls. Checking if a file
-exists on a cold AFS cache may
-take several minutes to perform; on a hot cache the same report
-may take a mere 3 seconds. When you get to more computationally
-expensive calculations, however, even having a hot AFS cache
-is not enough to cut down your runtime.
-
-There are certain calculations that someone may want to be
-able to perform on manipulated data. As such, this data should
-be cached on disk, if the process for extracting this data takes
-a long time. Also, for usability's sake, Wizard should generate
-the common case reports.
-
-Ensuring that machine parseable reports are made, and then making
-the machinery to reframe this data, increases complexity. Therefore,
-the recommendation is to assume that if you need to run iteratively,
-you'll have a hot AFS cache at your fingertips, and if that's not
-fast enough, then cache the data.
-
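The caching recommendation above amounts to memoizing expensive per-install computations on disk; cheap checks are simply rerun against a hot AFS cache. A minimal sketch, where the cache location and JSON format are assumptions:

```python
import json, os

def cached(cache_path, compute):
    """Return data from `cache_path` if present, else compute and store it.

    Cheap, AFS-bound checks can just be rerun against a hot cache;
    anything computationally expensive goes through here so repeated
    report runs don't redo the work.
    """
    if os.path.exists(cache_path):
        with open(cache_path) as f:
            return json.load(f)
    data = compute()
    with open(cache_path, "w") as f:
        json.dump(data, f)
    return data
```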
COMMIT MESSAGE FIELDS:
Installed-by: username@hostname
NOTES:
-- It is not expected or required for update scripts to exist for all
+- It is neither required nor expected for update scripts to exist for all
intervening versions that were present pre-migration; only for it
to work on the most recent migration.
This also means that /mit/scripts/wizard/srv MUST NOT lose revs after
deployment.
-- Full fledged logging options. Namely:
- x all loggers (delay implementing this until we actually have debug stmts)
- - default is WARNING
- - debug => loglevel = DEBUG
- x stdout logger
- - default is WARNING (see below for exception)
- - verbose => loglevel = INFO
- x file logger (creates a dir and lots of little logfiles)
- - default is OFF
- - log-file => loglevel = INFO
-
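The handler scheme above maps onto Python's standard logging module roughly as follows; the function name and wiring are illustrative, not the actual Wizard code:

```python
import logging

def setup_logging(debug=False, verbose=False, log_dir=None, ident=None):
    """Sketch of the logging options above (names are assumptions).

    - all loggers: WARNING by default, DEBUG with --debug
    - stdout:      WARNING by default, INFO with --verbose
    - file logger: off by default, INFO with --log-file
    """
    logger = logging.getLogger("wizard.sketch")
    logger.handlers = []  # keep the sketch idempotent across calls
    # The logger itself must pass records at the lowest level any
    # handler wants to see, so derive its level from the options
    if debug:
        logger.setLevel(logging.DEBUG)
    elif verbose or log_dir:
        logger.setLevel(logging.INFO)
    else:
        logger.setLevel(logging.WARNING)
    stdout = logging.StreamHandler()
    stdout.setLevel(logging.INFO if verbose else logging.WARNING)
    logger.addHandler(stdout)
    if log_dir is not None:
        # a directory of lots of little logfiles, one per identifier
        handler = logging.FileHandler("%s/%s.log" % (log_dir, ident))
        handler.setLevel(logging.INFO)
        logger.addHandler(handler)
    return logger
```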
OVERALL PLAN:
* Some parts of the infrastructure will not be touched, although I plan
1. Have the Git repository and working copy for the project on hand.
-/- wizard prepare-pristine --
-
-A 2. Checkout the pristine branch
-
-A 3. Remove all files from the working copy. Use `wipe-working-dir`
-
-A 4. Download the new tarball
+ 2. Checkout the pristine branch
-A 5. Extract the tarball over the working copy (`cp -R a/. b` works well,
- remember that the working copy is empty; this needs some intelligent
- input)
+ 3. Run `wizard prepare-pristine APP-VERSION`
-A 6. Check for empty directories and add stub files as necessary.
- Use `preserve-empty-dir`
+ 4. Checkout the master branch
-\---
-
- 7. Git add it all, and then commit as a new pristine version (v1.2.3)
-
- 8. Checkout the master branch
-
- 9. [FOR EXISTING REPOSITORIES]
+ 5. [FOR EXISTING REPOSITORIES]
Merge the pristine branch in. Resolve any conflicts that our
patches have with new changes. Do NOT let Git auto-commit it
with --no-commit (otherwise, you want to git commit --amend)
[FOR NEW REPOSITORIES]
Check if any patches are needed to make the application work
- on Scripts (ideally, it shouldn't.
-
-/- wizard prepare-new --
+ on Scripts (ideally, it shouldn't.) Run
+ `wizard prepare-new` to set up common filesets for our repositories.
- Currently not used for anything besides parallel-find.pl, but
- we reserve the right to place files in here in the future.
-
-A mkdir .scripts
-A echo "Deny from all" > .scripts/.htaccess
-
-\---
-
- 10. Check if there are any special update procedures, and update
+ 6. Check if there are any special update procedures, and update
the wizard.app.APPNAME module accordingly (or create it, if
need be).
- 11. Run 'wizard prepare-config' on a scripts server while in a checkout
+ 7. Run 'wizard prepare-config' on a scripts server while in a checkout
of this newest version. This will prepare a new version of the
configuration file based on the application's latest installer.
Manually merge back in any custom changes we may have made.
Check the configuration files for user-specific gunk, and modify
wizard.app.APPNAME accordingly.
- 12. Commit your changes, and tag as v1.2.3-scripts (or scripts2, if
+ 8. Commit your changes, and tag as v1.2.3-scripts (or scripts2, if
you are amending an install without any upstream changes)
NOTE: These steps should be run on a scripts server
- 13. Test the new update procedure using our test scripts. See integration
+ 9. Test the new update procedure using our test scripts. See integration
tests for more information on how to do this.
http://scripts.mit.edu/wizard/testing.html#acceptance-tests
You will need to be on scripts-security-upd to get bits to do this. Make sure you remove
these bits when you're done.
-A 14. Run `wizard research appname`
+ 10. Run `wizard research appname`
which uses Git commands to check how many
working copies apply the change cleanly, and writes out a logfile
with the working copies that don't apply cleanly. It also tells
us about "corrupt" working copies, i.e. working copies that
have over a certain threshold of changes.
-A 15. Run `wizard mass-upgrade appname`, which applies the update to all working
- copies possible, and sends mail to users to whom the working copy
- did not apply cleanly.
+ 11. Run `wizard mass-upgrade appname`, which applies the update to all working
+ copies possible.
- 16. Run parallel-find.pl to update our inventory
+ 12. Run parallel-find.pl to update our inventory
* For mass importing into the repository, there are a few extra things:
- A .scripts directory, with the intent of holding Scripts specific files
if they become necessary.
- * .scripts/lock (generated) which locks an autoinstall during upgrade
-
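The `.scripts/lock` note above suggests a simple exclusive-create lock around the upgrade; a minimal sketch, where the class name and the pid payload are assumptions:

```python
import os, errno

class UpgradeLock:
    """Locks an autoinstall during upgrade via .scripts/lock.

    O_CREAT|O_EXCL makes lock creation atomic, assuming the
    filesystem (e.g. AFS) supports atomic exclusive create.
    """
    def __init__(self, install_dir):
        self.path = os.path.join(install_dir, ".scripts", "lock")

    def acquire(self):
        try:
            fd = os.open(self.path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        except OSError as e:
            if e.errno == errno.EEXIST:
                return False  # another upgrade already holds the lock
            raise
        # record who holds the lock, for debugging stale locks
        os.write(fd, str(os.getpid()).encode())
        os.close(fd)
        return True

    def release(self):
        os.unlink(self.path)
```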