                                	TDL  Hints 
 
 
Q1: How much memory do I have? 
 
A1: To check the available Heap memory of your system, simply select the 
    Memory Resource option under the Help menu.  It also indicates the amount 
    of Stack space available. 
 
Q2: What network parameter limitations exist? 
 
A2: Select the Network Resources option under the Param menu.  It indicates 
    how many inputs the current version of TDL supports, the maximum number of 
    network units allowed, and the maximum number of training examples supported. 
    Beginning with TDL v. 1.1 the user will have control over TDL's memory organization 
    (i.e., be able to control how large the unit pool is, how many examples can be loaded, etc.) 
 
Q3: What can I do if my program runs out of memory? 
 
A3: TDL stores data as doubles (8 bytes of storage per floating-point number). 
    It may very well happen that your system runs out of memory if large amounts of data  
    need to be stored.  This version of TDL was tested successfully on a 586 with 8 Meg of RAM on 
    a data set consisting of 29 inputs, 3 outputs and 12,000 examples. 
    You can control the amount of memory a TDL task requires by reducing the number of 
    genomes it uses (Under Param menu option; Standard Evolution). 
    If all else fails, you can simply split your data set into several separate files  
    and learn them incrementally (multi-shot learning).   
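As a rough back-of-the-envelope check (assuming, as stated above, that every input and 
output value is stored as an 8-byte double, and ignoring TDL's internal bookkeeping 
overhead), you can estimate a data set's memory footprint before loading it: 

```python
def estimate_megabytes(inputs, outputs, examples, bytes_per_value=8):
    """Rough lower-bound memory estimate for a data set stored as doubles."""
    total_bytes = (inputs + outputs) * examples * bytes_per_value
    return total_bytes / (1024 * 1024)

# The data set mentioned above: 29 inputs, 3 outputs, 12,000 examples.
print(round(estimate_megabytes(29, 3, 12_000), 2))  # roughly 2.93 MB
```

If the estimate approaches your available Heap memory (see Q1), consider splitting the 
data set as described above. 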
     
Q4a: What is incremental learning?  
 
A4a: TDL allows you to incrementally learn a collection of data set files.  Simply 
    supply the files in any order and perform multi-shot learning (do not RESET network 
    memory after training on each data set).   
     
    NOTE: As with any incremental learning system, learning a new data set may lead 
    TDL to forget previously learned information. You may have to present the different 
    data set files several times to achieve some stability. 
 
Q4b: How can I utilize TDL for incremental learning? 
 
A4b: Assume you have a large file of patterns.  Let's refer to this file as patterns.net.  Furthermore, assume 
this file is split up into several smaller files, such that the following holds:  
 
		 patterns.net = p1.net + p2.net + ... + pk.net. 
 
You can learn each of these files separately (WARNING: use multi-shot learning!) by placing them into a 
batch file (For more information see Help menu option on Batch files).  After TDL has learned each of 
these files you can test the TDL generated network on new examples.  To do this, you need to test the new 
patterns on each of the different contexts that the network was trained on.  For example, the files p1 
through pk represent different contexts. Let's say that the new patterns are contained in a file named 
patterns.tes.  Whenever you supply a file with extension *.tes or *.cls you need to specify the parameter 
Domain (also see Help menu for help on testing files).  This parameter indicates which context you are 
using during testing.  Returning to our previous example, you would repeatedly assign the contexts p1 
through pk to the testing file and then apply that file to the network for testing.  Once the test file has 
been applied to the network for each context and the performance results have been collected, you can 
apply any decision weighting scheme to decide the overall outcome.  For example, if you have trained the 
network with k=10 files you may decide that at least 5 different contexts have to be in accordance with 
one another (provide the same output) for a result to hold.  Otherwise, you may conclude the network does 
not know the answer.  As mentioned earlier, the choice for combining the decisions of the different TDL 
subnetworks is left up to you.  
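One simple decision weighting scheme is the majority vote described above. The sketch 
below is only an illustration of that idea, not part of TDL itself; the context names 
and the agreement threshold are hypothetical and up to you: 

```python
def combine_context_outputs(outputs_by_context, min_agreement):
    """Majority-style decision over per-context test outputs.

    outputs_by_context maps each context (e.g. 'p1' .. 'pk') to the output
    the network produced for one test pattern under that context.  Returns
    the most common output if it reaches min_agreement votes, otherwise
    None (i.e., the network "does not know the answer").
    """
    counts = {}
    for output in outputs_by_context.values():
        counts[output] = counts.get(output, 0) + 1
    winner, votes = max(counts.items(), key=lambda item: item[1])
    return winner if votes >= min_agreement else None

# k = 10 contexts; require at least 5 to agree, as in the example above.
outputs = {f"p{i}": ("yes" if i <= 6 else "no") for i in range(1, 11)}
print(combine_context_outputs(outputs, min_agreement=5))  # -> 'yes'
```

More elaborate schemes (e.g., weighting each context by its training performance) follow 
the same pattern: collect one output per context, then reduce them to a single decision. 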
 
Q5: How can I speed up my program? 
 
A5: One option is to reduce the number of genomes used during learning.  You can also 
    change the maximum number of epochs a perceptron is trained.  Yet another option 
    is to decrease the maximum number of generations of the evolutionary training 
    process.  Finally, you can also reduce the maximum recession parameter.  The lower you  
    set this parameter, the earlier the evolutionary process is aborted and a new network unit 
    is trained.  All these parameters can be set from within the Param menu (under the option 
    Standard Evolution). 
 
BUG REPORTS AND SUGGESTIONS: 
 
You are always welcome to submit suggestions on how to improve TDL or on what to add to the 
TDL User Hints section.  Also, when describing a problem or a solution to a problem, make sure 
you supply enough detail (attach data files, etc.) so that the problem you encountered 
can be reconstructed. 
 
Send bug reports and suggestions by e-mail to: zlxx69a@prodigy.com.   

