$Id: TODO,v 1.171 2003/10/10 05:41:56 edwinh Exp $
$Name: v1_2_1 $

Flexbackup to-do list
Some of these might not happen

Cleanup/fix:
 - Consolidate use_file, use_pipe, etc to single string var.
 - Fix remote tape drive handling: the mt/uname conditionals are evaluated locally
 - Clean up more $main:: vars, split into types or refactor code
 - When level > 9 and rm old logs/stamps, order is printed wrong (cosmetic nit)
 - If comp_log is changed, we don't nuke old logs that would normally be
   overwritten by ourselves had comp_log stayed the same. (cosmetic nit)
 - Fix afio w/ null (FreeBSD printf can't spit out a null char)
 - Multiple backups launched within the same second clobber each other's
    logs/filenames
 - rpm-delta mode - prelink messages clutter stderr
 - Whole class of cross-platform command switch stuff:
    - Use dump combined flags (for older dumps that want abc not -a -b -c)
    - Not all finds have the -fstype flag (from $traverse_fs)
    - Use lowest common denominator flags? Sort of what's tried now w/ cpio
    (This is why pax exists...)

Misc:
 - Do .debs?
 - Try against torture-test list:
     http://ftp.berlios.de/pub/star/testscripts/zwicky/testdump.doc.html
 - Update mt manpages collection I keep in cvs
 - Pondering moving everything to Savannah.  SourceForge is annoying me:
   the mailing list archives hardly work, and I hate that the file release
   system is totally un-automatable.

Features to add:

  - Remote directories for to-disk backups

  - Remote device via Net::FTP?

  - Handle multi-volume backups (original idea w/ 'multibuf' never panned out)

      Found "multivol" program.
      Ack.  Still not workable; needs edits to multivol...

      mbuffer can do this?  Bah, won't work over remote; it needs a terminal
      attached.  Testing hooks in now for local.

  - Make package delta work with Debian pkgs or others

  - A shorter summary status that's better than doing
         egrep "(error|of set|Backup of|written)"
    on the logfile.  Needs to handle all configurations... (sketch below)
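
     A rough sketch of what a hypothetical -summary mode could do; the log
     path and match pattern here are just illustrative, not real config
     values:

         # print only the interesting lines from an existing logfile
         my $log = "/var/log/flexbackup/latest.log";   # hypothetical path
         open(my $fh, '<', $log) or die "cannot open $log: $!\n";
         while (my $line = <$fh>) {
             print $line if $line =~ /error|of set|Backup of|written/;
         }
         close($fh);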

 - CD-ROM burning (or just .iso type?) for full backups? any level?
      just act like to-file archives, and take those files & write to CD?
      "iso" type with a dirtree copy?  wasted space this way.
      be able to split archives into 650MB chunks?
      lots of questions

 - If we detect an error and are using tapes, do something intelligent
     to reposition the tape, add another filemark, etc. to get the tape to a
     known state and avoid the maybe-error in the index & what 'nextfile' is
     on the tape (rough sketch below)
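
     One possible recovery sequence, sketched with system() calls and GNU
     mt-st style subcommands (the device name and exact operations are
     assumptions, not current flexbackup behavior):

         my $dev = "/dev/nst0";               # hypothetical non-rewinding device
         system("mt", "-f", $dev, "status") == 0
             or warn "mt status failed\n";
         system("mt", "-f", $dev, "eod") == 0     # seek past the last good file
             or warn "could not space to end of data\n";
         system("mt", "-f", $dev, "weof", "1") == 0
             or warn "could not write filemark\n";
         # ...then record the corrected 'nextfile' position in the index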

 - Log tape sizes (output of buffer or dd or mt tell can be used?)

 - Use the AppConfig module or something for the config file, so it doesn't
   have to be in Perl syntax (I don't mind Perl syntax at all; not sure how
   Joe User feels).  Sketch below.
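
   A minimal sketch with AppConfig; the option names and config file location
   are made up, and the real keys would have to match flexbackup's:

       use AppConfig;

       my $config = AppConfig->new();
       $config->define('device=s');        # e.g. "device = /dev/nst0"
       $config->define('compress=s');
       $config->define('set=s@');          # a key that may appear several times

       my $conf = '/etc/flexbackup.conf';  # hypothetical location
       -r $conf or die "cannot read $conf\n";
       $config->file($conf);

       my $device = $config->get('device');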

 - smbtar?
     doable - use an fs spec of something like user:pass@host/share:dir
     smbtar has a "files newer than" option; test it
     don't want the password echoed
     no exclude-from-find support
     top level of shares only?
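
     A parsing sketch for the proposed spec format (purely illustrative,
     nothing like this exists yet):

         # split "user:pass@host/share:dir" into pieces
         my $spec = 'backup:secret@fileserver/homes:some/dir';
         if ($spec =~ m{^([^:]+):([^@]+)\@([^/]+)/([^:]+)(?::(.*))?$}) {
             my ($user, $pass, $host, $share, $dir) = ($1, $2, $3, $4, $5);
             # hand these to smbtar; be careful never to echo $pass
         }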

 - Make -extract -files for afio/cpio/zip recurse dirs
      dump/tar already do this.  Actually just check that all act the same.

 - Interactive restore-like shell for types besides dump
      see notes below

 - Destdir option for compare / extract

 - Umask spec for on-disk archives/logs/stamps/index.  Check tmpfiles too.

 - Spiff up the contributed CGI program?

 - Autoloader/mtx support? Need someone who has one to do this

 - Do we want to count on perl (or flexbackup for that matter) being
     installed on remote machines?  We could do some built-in replacements
     for 'find', for instance, and make things more standardized, although
     keeping it simply built out of standard commandline utils is a big plus
     as well...

 - Make per-host path overrides possible
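
     One way this could look in a Perl-syntax config file (the %path and
     %remotepath names are hypothetical, not current flexbackup variables):

         # defaults, overridable per remote host
         %path = ( 'tar' => 'tar', 'find' => 'find' );
         %remotepath = (
             'bsdbox' => { 'tar'  => '/usr/local/bin/gtar' },
             'oldsun' => { 'find' => '/usr/xpg4/bin/find' },
         );

         sub cmd_for {
             my ($host, $cmd) = @_;
             return $remotepath{$host}{$cmd} || $path{$cmd};
         }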

 - Make -set or -dir able to be given multiple times on the commandline
   (then what do we call the log file?)  Sketch below.
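
     Collecting repeated options is easy if (as assumed here) the parsing is
     Getopt::Long based; the open question is really just the log naming:

         use Getopt::Long;

         my (@sets, @dirs);
         GetOptions('set=s' => \@sets,     # -set home -set etc ...
                    'dir=s' => \@dirs);    # -dir /u -dir /var ...
         # one idea: join the names for the logfile, e.g. "home+etc"
         my $logname = join('+', @sets, @dirs) || 'backup';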

 - Store lists of files backed up in an on-disk db (to help find what to
     restore from?).  Actually, now you can just 'zgrep filename
     /var/log/flexbackup/*' and that works pretty well.

 - Extract options to not overwrite existing files

 - Store the index db on the tapes?
      But we don't know the size ahead of time unless we hard-limit a
      reserved size.  Updating the db at the beginning of the tape adds
      tape-travel overhead for every backup.  And rewind->write loses the
      rest of the contents past the first file?

 - gnuplot graphs of backup size/duration vs time (parse logfiles)
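
     A sketch of the log-parsing half; the line format matched here is an
     assumption, and the real pattern would come from the actual logfiles:

         # emit "date megabytes" pairs that gnuplot can read as a data file
         while (my $line = <>) {
             # hypothetical log line: "... 1234 MB written ... 2003/10/10 ..."
             if ($line =~ m{(\d+)\s+MB written.*?(\d{4}/\d{2}/\d{2})}) {
                 print "$2 $1\n";
             }
         }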

 - Encryption with gnupg?

 - Make file-based backups able to use split for smaller archives?
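
     The simple approach is probably just piping the archiver output through
     split(1); a sketch of the pipe, with arbitrary size and names (the size
     suffix spelling varies between split versions):

         # write ~650MB pieces named archive.tar.gz.aa, .ab, ...
         open(my $out, '|-', 'split', '-b', '650m', '-', 'archive.tar.gz.')
             or die "cannot start split: $!\n";
         # ...stream the archive data into $out instead of a plain file...
         close($out) or warn "split exited non-zero\n";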

 - Along the same lines as "-toc all", there should be facilities in
   flexbackup to query when the last level N backup of a filesystem was done
   and what tape it was on.
     -showindex /u -level 0
   would print out when the last level 0 backup was done and what the key
   index was.


-------------------------------------------------------------

Notes for cloning a restore-like shell for afio/tar/cpio types
Use the Perl readline module
(Maybe a separate program or module that runs off a list fed to it)

1) do a listing
2) parse & put into data structures
3) mt bsf 1
4) then give a shell w/ all commands from regular dump
5) mark things for extraction
6) reopen archive & extract that list
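
A minimal skeleton of the shell loop for steps 4-6, assuming Term::ReadLine
is the "Perl readline module" meant above (command handling is just a stub):

    use Term::ReadLine;

    my $term = Term::ReadLine->new('flexbackup restore');
    my %marked;                                  # files marked for extraction
    while (defined(my $line = $term->readline('restore> '))) {
        my ($cmd, @args) = split ' ', $line;
        next unless defined $cmd;
        last if $cmd eq 'quit';
        if    ($cmd eq 'add')     { $marked{$_} = 1 for @args }
        elsif ($cmd eq 'delete')  { delete $marked{$_} for @args }
        elsif ($cmd eq 'extract') {
            # step 6: reopen the archive and extract keys(%marked)
        }
        # ls/cd/pwd/what would walk the data structures built in step 2
    }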

Help from restore to jog my brain...

        ls [arg] - list directory
        cd arg - change directory
        pwd - print current directory
        add [arg] - add `arg' to list of files to be extracted
        delete [arg] - delete `arg' from list of files to be extracted
        extract - extract requested files
        setmodes - set modes of requested directories
        quit - immediately exit program
        what - list dump header information
        verbose - toggle verbose flag (useful with ``ls'')
        help or `?' - print this list
If no `arg' is supplied, the current directory is used


Local Variables:
mode: flyspell
end: