relax Changelog

New in version 3.3.3 (November 24th, 2014)

  • Features:
  • Implemented the lib.geometry.vectors.vector_angle_atan2() relax library function. This is for calculating the inter-vector angle using the more numerically stable atan2() formula.
  • Implemented the lib.geometry.vectors.vector_angle_acos() relax library function. This is used to calculate the inter-vector angle using the arccos of the dot product formula. The function has been introduced into the relax library as the calculation is repeated throughout relax. Both inter-vector angle formulas are sketched at the end of this version's listing.
  • Expanded the basis sets for the align_tensor.matrix_angles user function to allow the correct inter-tensor angles to be calculated. This includes the standard inter-matrix angles via the arccos of the Euclidean inner product of the alignment matrices in rank-2, 3D form divided by the Frobenius norms of the matrices (sketched at the end of this version's listing), the irreducible spherical tensor 5D basis set {A-2, A-1, A0, A1, A2}, and the unitary 9D basis set {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} (all of which produce the same result).
  • Expanded the basis sets for the align_tensor.svd user function to allow the correct singular values and condition number to be calculated. This includes the irreducible spherical tensor 5D basis set {A-2, A-1, A0, A1, A2} and the unitary 9D basis set {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} (both of which produce the same result).
  • Added the angle_units and precision arguments to the align_tensor.matrix_angles user function to allow either degrees or radians to be output and the number of decimal places to be specified.
  • Added the precision argument to the align_tensor.svd user function to allow the number of decimal places for the singular values and condition number to be specified.
  • Updated the align_tensor.display user function to output the irreducible spherical harmonic weights. This is the alignment tensor in the {A-2, A-1, A0, A1, A2} notation.
  • Changes:
  • Basic Epydoc fix for the data_store.exp_info module.
  • Epydoc fix for the name_pipe() method of the relaxation dispersion auto-analysis for repeated data.
  • Fixes for the HTML user manual compilation. The index.html file was not being created as the main page has changed from 'relax_user_manual.html' to 'The_relax_user_manual.html'.
  • Added a line to the release checklist document about updating the wiki release links. These are for the combined release notes pages at Relax releases, Relax release descriptions, Relax release metadata, Relax release features, Relax release changes, Relax release bugfixes, Relax release links.
  • Updates for the release announcement section of the release checklist document.
  • Created a system test to catch a rare relaxation data loading problem.
  • Created the Mf.test_dauvergne_protocol_sphere system test. This catches bug #22963: Using '@N*' to define the interatomic interactions for a model-free analysis fails when using non-backbone 15N spins.
  • Set more reasonable default values for the lib.structure.pdb_write functions atom() and hetatm(). The occupancy now defaults to 1.0 instead of None, and the temperature factor to 0.0 instead of None. This avoids painful errors when using these functions, as these arguments must be floating point numbers at all times, hence the previous default value of None caused a TypeError.
  • Updated the PDB file in the test_suite/shared_data/model_free/sphere/ directory. The relax library is now being used to create the PDB file. Additional TER and CONECT records are now being created so the result is a more correct PDB file.
  • Converted all ATOM records to HETATM in the sphere.pdb file.
  • Renamed vector_angle() to vector_angle_normal() in the lib.geometry.vectors module. This is to standardise the naming now that the standard vector angle formulae are implemented as the vector_angle_acos() and vector_angle_atan2() functions.
  • Added 6 unit tests for the lib.geometry.vectors.vector_angle_acos() function. These are similar to those of the vector_angle_normal() function but unsigned angles are checked for.
  • Created 6 unit tests for the lib.geometry.vectors.vector_angle_atan2() function.
  • Created a script and log file to demonstrate differences between alignment tensor basis sets. This shows that the inter-tensor angles and condition numbers are dependent on the basis set used.
  • Improved the printouts from the align_tensor.svd user function by including the basis set text.
  • Updated the log file for comparing different alignment tensor basis sets for align_tensor.svd changes.
  • Implemented a new default basis set for the align_tensor.matrix_angles user function. This uses the standard definition of the inter-matrix angle via the Euclidean inner product of the two matrices divided by the product of the Frobenius norms of each matrix. As this is a linear map, it should produce the correct definition of inter-tensor angles.
  • Improvements to the description of the align_tensor.matrix_angles user function.
  • Updated the test_matrix_angles_identity() unit test for pipe_control.align_tensor.matrix_angles(). This is the test in the _prompt.test_align_tensor.Test_align_tensor module. The basis set has been set back to the now non-default value of 0, and the value checks have been converted from assertEqual() to assertAlmostEqual() to allow for small truncation errors.
  • Conversion of the basis_set argument for the align_tensor.matrix_angles user function. The argument is now a string that accepts the values of 'matrix', 'unitary 5D', and 'geometric 5D' to select between the different matrix angles techniques. This has been updated in the test suite as well.
  • Added a check for the values of the basis_set argument. This is to the align_tensor.matrix_angles user function backend.
  • Printout improvements clarifying the align_tensor.matrix_angles user function.
  • Conversion of the basis_set argument for the align_tensor.svd user function. The argument is now a string that accepts the values of 'unitary 9D', 'unitary 5D', and 'geometric 5D' to select between the different SVD matrices. This has been updated in the test suite as well.
  • Expanded the N_state_model.test_5_state_xz system test. This now covers the new 'unitary 9D' basis set for the align_tensor.svd user function and the new 'matrix' basis set for the align_tensor.matrix_angles user function.
  • Expansion of the align_tensor.matrix_angles user function. The new basis set 'unitary 9D' has been introduced. This creates vectors as {Sxx, Sxy, Sxz, Syx, Syy, Syz, Szx, Szy, Szz} and computes the inter-vector angles. These match the 'matrix' basis set whereby the Euclidean inner product divided by the Frobenius norms is used to calculate the inter-tensor angles. In addition, the user function documentation and printouts have been improved, and the backend code has been simplified.
  • Updated the script and log file for demonstrating differences between alignment tensor basis sets. This now handles the changes to the basis_set arguments used in the align_tensor.matrix_angles and align_tensor.svd user functions, and includes the new basis sets.
  • Added the irreducible tensor notation of {A-2, A-1, A0, A1, A2} to the alignment tensor object. This follows from the definition of Sass et al., J. Am. Chem. Soc., 1999, 121, 2047-2055, DOI: 10.1021/ja983887w. The equations of (2) were converted using Gaussian elimination to obtain a reduced row echelon form, so that the equations in terms of {A-2, A-1, A0, A1, A2} were derived. These have been coded into the alignment tensor object calc_Am2, calc_Am1, calc_A0, calc_A1 and calc_A2 methods respectively, and the values can be obtained by accessing the Am2, Am1, A0, A1, and A2 objects. To check that the implementation is correct, a unit test has been created to compare the calculated values with those determined using Pales.
  • Expanded the unit test of the alignment tensor {A-2, A-1, A0, A1, A2} parameters to cover all values.
  • Created functions in the relax library for calculating the inter-vector angle for complex vectors. This is in the lib.geometry.vectors module. The vector_angle_complex_conjugate() function (http://www.nmr-relax.com/api/3.3/lib.geometry.vectors-module.html#vector_angle_complex_conjugate) has been created to calculate the angle between two complex vectors. This uses the new auxiliary function complex_inner_product() to calculate the complex inner product.
  • Added the 'irreducible 5D' basis set option to the align_tensor.matrix_angles user function. This is for the inter-tensor vector angle for the irreducible 5D basis set {S-2, S-1, S0, S1, S2}. Its results match those of the standard tensor angle as well as the 'unitary 9D' basis set.
  • Added the 'irreducible 5D' basis set option to the align_tensor.svd user function. This is for the SVD of the irreducible 5D basis set {A-2, A-1, A0, A1, A2} vectors. Its results match those of the 'unitary 9D' basis set.
  • Editing of the description for the 'irreducible 5D' alignment tensor basis set. This is for the align_tensor.matrix_angles and align_tensor.svd user functions. All Sm elements have been converted to Am.
  • Editing of the description for the align_tensor.matrix_angles user function.
  • Editing of the align_tensor.svd user function description.
  • Updated the script and log file for demonstrating differences between alignment tensor basis sets. The 'irreducible 5D' basis set is now used for both the align_tensor.matrix_angles and align_tensor.svd user functions.
  • Fix for a spelling mistake in the align_tensor.matrix_angles user function printouts.
  • Small fix for the align_tensor.matrix_angles user function documentation.
  • Expanded the N_state_model.test_5_state_xz system test for more alignment tensor basis sets. The align_tensor.matrix_angles and align_tensor.svd user functions are now being called with the additional 'irreducible 5D' and 'unitary 9D' basis sets to make sure these work correctly.
  • Created the Align_tensor.test_align_tensor_matrix_angles system test. This is to check the angles calculated by the align_tensor.matrix_angles user function. As there are no external references, this essentially fixes the angles to the currently calculated values to catch any accidental changes in the future.
  • Created the Align_tensor.test_align_tensor_svd system test. This is to check the angles calculated by the align_tensor.svd user function. As there are no external references, this essentially fixes the singular values and condition numbers to the currently calculated values to catch any accidental changes in the future.
  • Fixes for the proportions of the align_tensor.matrix_angles user function GUI wizard.
  • Expanded the 'irreducible 5D' text in the align_tensor.matrix_angles and align_tensor.svd user functions. This now explains that these are the coefficients for the spherical harmonic decomposition.
  • Improved the text for the irreducible tensor notation in the align_tensor.display user function.
  • Formatting fix for the magnetic susceptibility tensor part of the align_tensor.display user function.
  • More improvements for the align_tensor.matrix_angles user function description.
  • Epydoc docstring fixes and expansion for the lib.io.sort_filenames() function.
  • Epydoc docstring fixes for the lib.spectrum.nmrpipe module. This is for the API documentation. The show_apod_rmsd_to_file() and show_apod_rmsd_dir_to_files() function docstrings have both been modified.
  • Epydoc docstring fixes for the pipe_control.opendx.map() function. The fixes include whitespace and textwrapping changes.
  • Python 2.5 fix for the align_tensor.display user function. The new irreducible spherical tensor coefficient printout was failing as the float.real attribute was only introduced in Python 2.6.
  • Shifted the structure checks into their own module. This shifts the special check_structure Check object from pipe_control.structure.main into the new checks module. It allows the check to be performed by other modules in the pipe_control.structure package.
  • Added the missing_error keyword argument to the pipe_centre_of_mass() function. This is from the pipe_control.structure.mass module. The new keyword controls what happens with the absence of structural data. The pipe_control.structure.checks.check_structure() function is now being used to either throw a warning and return [0, 0, 0] or to raise a RelaxError.
  • Fix for the new unit tests - Python 2.5 floats do not have a 'real' property.
  • Bugfixes:
  • Fix for bug #22961, the failure of relaxation data loading with the message "IndexError: list index out of range". The bug was found by Julien Orts. It is triggered by loading relaxation data from a file containing spin name information and supplying the spin ID using the spin name to restrict data loading to a spin subset. To solve the problem, the pipe_control.relax_data.pack_data() function has been redesigned. Now the selection union concept of Chris MacRaild's selection object is being used by joining the spin ID constructed from the data file and the user supplied spin ID with '&', and using this to isolate the correct spin system.
  • Big Python 3 bug fix for the dep_check module for the detection of the NMRPipe showApod software. The showApod program was always falsely detected as not being present when using Python 3. This is because the output of the program was being tested using string comparisons, however the output from programs obtained via the subprocess module is no longer a string but rather a byte array in Python 3. Therefore the byte array is now being converted to text when Python 3 is being used, allowing the showApod software to be detected. A sketch of this byte array to text conversion is given at the end of this version's listing.
  • Python 3 bug fix for the lib.spectrum.nmrpipe.show_apod_extract() function. The subprocess module output from the showApod program, or any software, is a byte array in Python 3 rather than text. This is now detected and the byte array converted to text before any processing.
  • Bug fix for the lib.structure.angles.angles_*() functions for odd increments. This affects the PDB representations of the diffusion tensor and frame order when the number of increments in the respective user functions is set to an odd number. It really only affects the frame_order.pdb_model user function, as the number of increments cannot be set in any of the other user functions (structure.create_diff_tensor_pdb, structure.create_rotor_pdb, structure.create_vector_dist, n_state_model.cone_pdb).
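
The two inter-vector angle formulas referenced in the vector_angle_acos() and vector_angle_atan2() entries above can be summarised with a minimal numpy sketch. This is only an illustration of the underlying mathematics, not the lib.geometry.vectors code itself:

    from numpy import arccos, arctan2, cross, dot
    from numpy.linalg import norm

    def angle_acos(u, v):
        # Inter-vector angle via the arccos of the normalised dot product.
        return arccos(dot(u, v) / (norm(u) * norm(v)))

    def angle_atan2(u, v):
        # The numerically more stable form using atan2 of the cross and dot products.
        return arctan2(norm(cross(u, v)), dot(u, v))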
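
The standard inter-matrix angle of the align_tensor.matrix_angles 'matrix' basis set, described above as the arccos of the Euclidean inner product divided by the Frobenius norms, corresponds to the following generic numpy sketch (not the relax backend code):

    from numpy import arccos
    from numpy.linalg import norm

    def tensor_angle(A, B):
        # Euclidean (Frobenius) inner product of the two rank-2, 3D alignment matrices.
        inner = (A * B).sum()
        # Divide by the product of the Frobenius norms, then take the arccos.
        return arccos(inner / (norm(A) * norm(B)))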
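
For the Python 3 showApod bug fixes above, the general pattern for handling subprocess output, which is a byte array under Python 3 but a string under Python 2, is roughly as follows. This is a simplified sketch of the approach, not the dep_check or lib.spectrum.nmrpipe code:

    from subprocess import PIPE, Popen

    def run_and_read(cmd):
        # Execute the external program and capture its standard output.
        pipe = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
        output = pipe.stdout.read()
        # Under Python 3 the output is a byte array rather than a string, so
        # convert it to text before any string comparisons.
        if not isinstance(output, str):
            output = output.decode()
        return output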

New in version 3.3.2 (November 14th, 2014)

  • Features:
  • Many improvements for the HTML version of the manual.
  • Improved sectioning printouts in the model-free dauvergne_protocol auto-analysis.
  • Significant improvements for the relax controller window.
  • All wizards and user functions in the relax GUI now have focus so that the keyboard is active without requiring a mouse click.
  • The ESC key will now close the relax controller window and all user function windows.
  • The structure.load_spins user function can now load spins from multiple non-identical molecules and merge them into one molecule allowing missing atoms and differential atom numbering to be handled.
  • Improvements to the printouts for many user functions.
  • Changes:
  • Updated the minfx version in the release checklist document to version 1.0.11.
  • Updated the relax version in the release checklist document to be more modern.
  • Spelling fixes for the CHANGES file.
  • Updates for the release checklist document. This is mainly because the main release notes are now on the relax wiki, for example for the current version at http://wiki.nmr-relax.com/Relax_3.3.1.
  • Spelling fixes throughout the CHANGES document.
  • Removed a few triple spaces in the CHANGES document.
  • Added periods to the end of all items in the CHANGES document.
  • Fix for an 'N/A' in the CHANGES document.
  • Converted a number of single spaces between sentences to double spaces in the CHANGES document.
  • More updates for the announcement section of the release checklist document.
  • The HTML version of the manual is now compiled with Unicode character support. It allows Greek symbols, for example, to be represented as text rather than LaTeX generated PNG images. This fixes titles and massively decreases the number of images required by the HTML pages.
  • Removal of many dual LaTeX and latex2html section titles in the manual. As the HTML manual is now compiled with Unicode support, the Greek characters in the titles are now supported. Therefore in the model-free and the values, gradients, and Hessians chapters, the dual LaTeX and latex2html section titles could be collapsed to the standard LaTeX section title. This will result in better formatting of the manual and its links.
  • Added instructions and a build script for creating a useful version of latex2html. This version is essential for building the HTML version of the manual. The build script downloads the Debian latex2html-2008 sources as well as all Debian patches for latex2html. It then applies a number of patches for fixing and improving the relax documentation. The program is then compiled and can be installed as the root user into /usr/local/.
  • Extended the number of words used in the HTML webpage file names. This is to hopefully prevent files from being overwritten by multiple files having the same name.
  • Added the writing out of parameters and χ2 values when creating a dx map. Task #7860: When dx_map is issued, create a parameter file which maps parameters to χ2 value.
  • Created the system test Relax_disp.test_dx_map_clustered_create_par_file, which must show that relax is not able to find the local minimum under clustered conditions. When creating the map, the map contains χ2 values which are lower than the clustered fitted values. This should not be the case. Running a larger map with larger bounds and more increments should show that there exists a minimum in the minimisation space with a lower χ2 value. Bug #22754: The minimise.calculate() does not calculate χ2 value for clustered residues. Task #7860: When dx_map is issued, create a parameter file which maps parameters to χ2 value.
  • Renamed test scripts and files for producing surface χ2 plots.
  • Renamed sample scripts making surface maps.
  • Added scripts to make surface plots of the spin independent parameters δω and Ra2.
  • Added example surface χ2 values for plots. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added example save state for more surface plotting.
  • Added a boolean argument to the dx.map user function to specify the creation of a file of parameters and associated χ2 values. For very special situations, the creation of this file is not desired.
  • Modified the structure of points in dx.map so that it is always a list of numpy arrays with 3 values.
  • When issuing the dx.map user function with points, implemented the writing out of a parameter file with the associated calculated χ2 values.
  • Improved the feedback in the User_functions.test_structure_add_atom GUI test. It is now clearer what the input and output data is.
  • The devel_scripts/python_multiversion_test_suite.py script now runs relax with the --time flag. This is for quicker identification of failure points. It will also force the sys.stdout buffer to be flushed more often on Python 2.5 so that it does not appear as if the tests have frozen.
  • Added check to system test Relax_disp.test_cpmg_synthetic_dx_map_points for the creation of a matplotlib surface command plot file.
  • Added the writing out of a matplotlib command file to plot surfaces of a dx map. It uses the minimum χ2 value in the map space to define the surfaces. It creates X,Y; X,Z; and Y,Z maps, where the values in the missing dimension have been cut at the minimum χ2 value. For each map, it creates a projected 3D map of the parameters and the χ2 value, and a heat map for the contours. It also adds scatter points for the minimum χ2 value and the 4 smallest χ2 values, and maps any points in the point file to scatter points. Mapping the points from the file to map points is done by finding the shortest Euclidean distance in the space from each point to any map point. A simplified sketch of this nearest-point mapping is given at the end of this version's listing.
  • Fix for testing the raising of expected errors in system tests. The check is skipped if the Python version is below 2.7. Bug #22801: Failure of the relax test suite on Python 2.5.
  • Inserted a z_axis limit for the plotting of 2D surfaces in matplotlib.
  • Added better figure control of χ2 values on z-axis for surface plots.
  • Narrowed the dx map in the system test Relax_disp.test_dx_map_clustered_create_par_file. This is to illustrate the failure of relax to find the global minimum. It seems there is a shallow barrier which relax fails to climb over in order to find the minimum value.
  • Added the verbosity argument to the pipe_control.minimise.reset_min_stats() function. All of the minimisation code which calls this now send in their verbosity arguments. This allows the text "Resetting the minimisation statistics." to be suppressed.
  • Added the verbosity argument to the pipe_control.value.set() function. This is passed into the pipe_control.minimise.reset_min_stats() function so its printouts can be silenced.
  • The pipe_control.opendx space mapping code now calls the value.set() function with verbosity=0. This is to silence the very repetitive statistics resetting messages when executing the dx.map user function.
  • Added more checks to the determine_rnd() of the dauvergne_protocol model-free auto-analysis. This is to try to catch bizarre situations such as bug #22730, model-free auto-analysis - relax stops and quits at the polate step. The following additional fatal conditions are now checked for: A file with the same name as the base model directory already exists; The base model directory is not readable; The base model directory is not writable. The last two could be caused by file system corruptions. In addition, the presence of the base model directory is checked for using os.path.isdir() rather than catching errors coming out of the os.listdir() function. These changes should make the analysis more robust in the presence of 'strangeness'.
  • Added an additional check to determine_rnd() of the dauvergne_protocol model-free auto-analysis. This is to try to catch bizarre situations such as bug #22730, model-free auto-analysis - relax stops and quits at the polate step. The additional check is that if the base model directory is not executable, a RelaxError is raised.
  • Added printouts to the determine_rnd() function of the dauvergne_protocol model-free auto-analysis. This is for better user feedback in the log files as to what is happening. It may help in debugging bug #22730: Model-free auto-analysis - relax stops and quits at the polate step.
  • Alphabetical ordering of imports in the dauvergne_protocol model-free auto-analysis.
  • Changed the model-free single spin optimisation title printouts. The specific_analyses.model_free.optimisation.spin_print() function has been deleted. It has instead been replaced by a call to lib.text.sectioning.subtitle(). This is to match the grid search setup title printouts and to differentiate these titles from those printed out by minfx being underlined by '~' characters.
  • Added extensive sectioning printouts to the dauvergne_protocol model-free auto-analysis. The lib.text.sectioning functions title() and subtitle() are now used to mark out all parts of the auto-analysis. This will allow for a much better understanding of the log files produced by this auto-analysis.
  • Complete redesign of the following of text in the relax controller window in the GUI. The previous design often no longer worked, for unknown reasons, and there were many situations where the scrolling to follow the text output would stop and could never be recovered. Therefore this feature has been redesigned. In the LogCtrl element of the relax controller, which displays the relax output messages, the at_end class boolean variable has been introduced. It defaults to True. The following events will turn it off: Arrow keys, Home key, End key, Ctrl-Home key, Mouse button clicks, Mouse wheel scrolling, Window thumbtrack scrolling (the side scrollbar), finding text, the pop up menu 'Go to start', and Select all (menu or Ctrl-A). It will only be turned on in two cases: The pop up menu 'Go to end', and if the caret is on the final line (caused by Ctrl-End, Mouse wheel scrolling, Page Down, Down arrow, Window thumbtrack scrolling, etc.). Three new methods have been introduced to handle certain events: capture_mouse() for mouse button clicks, capture_mouse_wheel() for mouse wheel scrolling, and capture_scroll() for window thumbtrack scrolling.
  • Improvements for selecting all text in the relax controller window. Selecting text using the pop up menu or [Ctrl-A] now shifts the caret to line 1 before selecting all text. This deactivates the following of the end of text, if active, as the text following feature causes the text selection to be lost.
  • Modified the behaviour of the relax controller window so that pressing escape closes the window. This involves setting the initial focus on the LogCtrl, and catching the ESC key press in the LogCtrl as well as all relax controller read only wx.Field elements and calling the parent controller handle_close() method.
  • Replaced the hardcoded integer keycodes in the relax controller with the wx variables. This is for the LogCtrl.capture_keys() handler method for dealing with key presses.
  • Improvement for all wizards and user functions in the relax GUI. The focus is now set on the currently displayed page of the wizard. This allows the keyboard to be active without requiring a mouse click. Now text can be instantly input into the first text control and the tab key can jump between elements. As the GUI user functions are wizards with a single page, this is a significant usability improvement for the GUI.
  • The ESC character now closes all wizards and user functions in the relax GUI. By using an accelerator table set to the entire wizard window to catch the ESC keyboard event, the ESC key will cause the _handler_escape() method to be called which then calls the windows Close() method to close the window.
  • Changed the logic for how the new analysis wizard in the GUI is destroyed. This relates to bug #22818, the GUI test suite failures in MS Windows - PyAssertionError: C++ assertion "Assert failure". The Destroy() method has been added to the Analysis_wizard class to properly close all elements of the wizard. This is now called from the menu_new() method of the Analysis_controller class, which is the target of the menu item and toolbar button. To allow the test suite to use this, the menu_new() method now accepts the destroy boolean argument. The test suite can set this to False and then access the GUI elements after calling the method (however the Destroy() method must be called by the test suite).
  • Redesign of how the new analysis wizard is handled in the GUI tests. This relates to bug #22818, the GUI test suite failures in MS Windows - PyAssertionError: C++ assertion "Assert failure". The GUI test base class method new_analysis_wizard() has been created to simplify the process. When a new analysis is desired, this method should be called. It will return the analysis page GUI element for use in the test. The method standardises the execution of the new analysis wizard and sets up the analysis in the GUI. It also properly destroys the wizard to avoid the memory leaking issues such as bug #22818. All GUI tests have been converted to use new_analysis_wizard(). This allows the GUI tests to pass on MS Windows. However there are still significant sources of memory leaks (the USER Objects count) visible in the Windows Task Manager.
  • Fix for the gui.fonts module to allow it to be used outside of the GUI.
  • Updated all of the scripts in devel_scripts/gui/. These have been non-functional since the merger of the relax bieri_gui branch back in January 2011.
  • The gui.misc.bitmap_setup() function can now be used outside of the GUI.
  • Fix for the GUI test base class new_analysis_wizard() method for relaxation dispersion analyses.
  • Modified the pipe_control.pipes.get_bundle() function to operate when no pipe is supplied. In this case, the pipe bundle that the current data pipe belongs to will be returned.
  • Created the Periodic_table.has_element() method for the lib.periodic_table module. This is used to simply check if a given symbol exists as an atom in the periodic table.
  • Added 4 unit tests to the _lib.test_periodic_table module for the Periodic_table.has_element() method.
  • Modified the internal structural object backend for the structure.read_pdb user function. The MolContainer._det_pdb_element() method for handling PDB files with missing element information has been updated to use the Periodic_table.has_element() method to check if the PDB atom name corresponds to any atoms in the periodic table. This allows for far greater support for HETATOMS and all of the metals.
  • Created the Structure.test_load_spins_multi_mol system test. This is to test yet to be implemented functionality of the structure.load_spins user function. This is the loading of spin information from similar, but not necessarily identical, molecules all loaded into the same structural model. For this, the from_mols argument will be added.
  • Fixes for the Structure.test_load_spins_multi_mol system test. The call to the structure.load_spins user function has also been modified so that all 3 spins are loaded at the same time.
  • Implemented the multiple molecule merging functionality of the structure.load_spins user function. The from_mols argument has been added to the user function frontend and a description added for this new functionality. In the backend, the pipe_control.structure.main.load_spins() function will now call the load_spins_multi_mol() function if from_mols is supplied. This alternative function is required to handle missing atoms and differential atom numbering.
  • Modified the N_state_model.test_populations system test to test the grid search code paths. This performs a grid search of one increment after minimisation, then switches to the 'fixed' N-state model and performs a second grid search of one increment. This now tests currently untested code paths in the grid_search() API method behind the minimise.grid_search user function. The test demonstrates a bug in the N-state model which was not uncovered in the test suite.
  • Created the N_state_model.test_CaM_IQ_tensor_fit system test. This is for catching bug #22849, the failure of the N-state model analysis when optimising only alignment tensors using RDCs and/or PCSs. This new test checks code paths unchecked in the rest of the test suite, and is therefore of high value.
  • Modified the atomic position handling in pipe_control.structure.main.load_spins_multi_mol(). The multiple molecule merging functionality of the structure.load_spins user function now handles missing atomic positions differently. The aim is that the length of the spin container position variable is fixed for all spins to the number of structures, as the N-state model analysis assumes this equal length for all spins. When data is missing, the atomic position for that structure is now set to None. This will require other modifications in relax to support this new design.
  • Modified the interatom.unit_vectors user function backend to handle missing atomic positions. This is to match the structure.load_spins user function change whereby missing atomic positions are now set to the value of None.
  • Fix for the atomic position handling in pipe_control.structure.main.load_spins_multi_mol(). The dimensionality of the position structure returned by the structural object atom_loop() method needed to be reduced.
  • The structure.load_spins user function now stores the number of states in cdp.N. This is to help the specific analyses which handle ensembles of structures. With the introduction of the from_mols argument to the structure.load_spins user function, the number of states is now not equal to the number of structural models, as the states can now come from different structures of the same model. Therefore the user function will now explicitly set cdp.N to the number of states depending on how the spins were loaded.
  • Clean up and speed up of the N_state_model.test_CaM_IQ_tensor_fit system test. All output files are now set to 'devnull' so that the system test no longer creates any files within the relax source directories. And the optimisation settings have been decreased to hugely speed up the system test.
  • Expanded the lib.arg_check.is_float_matrix() function by adding the none_elements argument. This matches a number of the other module functions, and allows for entire rows of the matrix to be None.
  • Lists of lists containing rows of None are now better supported by the lib.xml functions. The object_to_xml() function will now convert the float parts to IEEE-754 byte arrays, and the None parts will be stored as None in the list node. The matching xml_to_object() function has also been modified to read in this new node format. This affects the results.write and state.save user functions (as well as the results.read and state.load user functions). A generic sketch of this float storage idea is given at the end of this version's listing.
  • Added spacing after the minimise.grid_search user function setup printouts. This is for better spacing for the next messages from the specific analysis.
  • Speed up of the N_state_model.test_CaM_IQ_tensor_fit system test. This test is however still far too slow.
  • Added printouts to pipe_control.pcs.return_pcs_data() and pipe_control.rdc.return_rdc_data(). These functions now accept the verbosity argument which, if greater than 0, will activate printouts of how many RDCs or PCSs have been assembled for each alignment. This will be useful for user feedback as the spin versus interatomic data container selections can be difficult to understand.
  • The verbosity argument for the N-state model optimisation is now propagated for more printouts. The argument for the calculate() and minimise() API methods is now sent into specific_analyses.n_state_model.optimisation.target_fn_setup(), and from there into the pipe_control.pcs.return_pcs_data() and pipe_control.rdc.return_rdc_data() functions. That way the number of RDCs and PCSs used in the N-state model is reported back to the user for better feedback.
  • Updated the N_state_model.test_CaM_IQ_tensor_fit system test so it operates correctly as a GUI test. All user functions are now executed through the special self._execute_uf() method to allow either the prompt interpreter or the GUI to execute the user function.
  • Modified the N_state_model.test_CaM_IQ_tensor_fit system/GUI test for implementing a new feature. The 'spin_selection' argument has been added to the interatom.define user function. This will be used to carry the spin selections over into the interatomic data containers.
  • Implemented the spin_selection Boolean argument for the interatom.define user function. This has been added to the frontend with a description, and to the backend. When set, it allows the spin selections to define the interatomic data container selection.
  • Changed the spin_selection argument default in the interatom.define user function backend. This now defaults to False to allow other parts of relax which call this function to operate as previously. The default for the interatom.define user function is however still True.
  • Modified the Structure.test_load_spins_multi_mol system test for the spin.pos variable changes. The atomic position for an ensemble of structures is now set to None rather than being missing, so the system test has been updated to check for this.
  • The align_tensor.display user function now has more consistent section formatting. The section() and subsection() functions of the lib.text.sectioning module are now being used to standardise these custom printouts with the rest of relax.
  • Modifications to the new N_state_model.test_CaM_IQ_tensor_fit system test. The system test now checks all of the optimised values to make sure the correct values have been found. That will block any future regressions in this N-state model code path. The system test is now also faster. And the pcs.structural_noise user function RMSD value has been set to 0.0 so that the test no longer has a random component affecting the final optimised values.
  • Added printouts for the rdc.calc_q_factors and pcs.calc_q_factors user functions. These are activated by the new verbosity user function argument which defaults to 1. If the value is greater than 0, then the backend will print out all the calculated Q factors.
  • The verbosity argument of the RDC and PCS q_factors() functions now defaults to 1. This causes the Q factors to be printed out at the end of all N-state model optimisations.
  • Created the Structure.test_bug_22860_CoM_after_deletion system test. This is to catch bug #22860, the failure of the structure.com user function after calling structure.delete.
  • Fix for the checks in the new Structure.test_load_spins_multi_mol system test. A spin index was incorrect.
  • Fix for the structure.load_spins user function when the from_mols argument is used. The load_spins_multi_mol() function of the pipe_control.structure.main module was incorrectly handling the atomic position returned by the internal structural object atom_loop() method. This position is a list of lists when multiple models are present. But when only a single model is present, it returns a simple list.
  • Modified the Structure.test_bug_22860_CoM_after_deletion system test to expect a RelaxNoPdbError. This tests that the structure.com user function raises RelaxNoPdbError after deleting all of the structural information from the current data pipe.
  • The mol_name argument is now exposed in the structure.add_atom user function. This has been added as the first argument of the user function to allow new molecules to be created or to allow the atom to be placed into a specific molecule container. The functionality was already implemented in the backend, so it has been exposed by simply adding a new argument definition to the user function.
  • Created the Structure.test_bug_22861_PDB_writing_chainID_fail system test. This is to catch bug #22861, the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete.
  • Small modification of the Structure.test_bug_22861_PDB_writing_chainID_fail system test. File metadata is now being set to demonstrate that the structure.delete user function does not remove this once there is no more data left for the molecule.
  • Small indexing fixes for the dispersion chapter of the relax manual.
  • Fix for system test Relax_disp.test_cpmg_synthetic_dx_map_points. Another import line was written to the matplotlib script.
  • Speedup and fix for the system test Relax_disp.test_dx_map_clustered_create_par_file. The following test was taken out, since this is a particularly interesting case. There exists a double minimum, where relax has not found the global minimum. This is due to not grid searching for Ra2, but instead using the minimum value.
  • Removed debugging code from the N_state_model.test_CaM_IQ_tensor_fit system test. This was an accidentally introduced state.save user function used to catch the system test state. It would result in the 'x.bz2' file being dumped in the current directory.
  • Loosened the checks in the Relax_disp.test_baldwin_synthetic_full system test. This is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system.
  • Fix for the Relax_disp.test_cpmg_synthetic_dx_map_points system test for certain systems. This change is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system. This may be related to 32-bit numpy 1.6.2 versus later numpy versions causing precision differences.
  • Fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test for certain systems. The optimisation precision has been increased, and the value checking precision has been decreased. This change is to allow the test to pass on Python 2.5 and 3.1 on a 32-bit GNU/Linux system. This may be related to 32-bit numpy 1.6.2 versus later numpy versions causing precision differences.
  • Converted all the extern.numdifftools modules using the dos2unix program.
  • Updated the Python 2 to Python 3 migration document to be more current.
  • Small edit of the docs/devel/2to3_checklist document.
  • Expanded the Python 2 to 3 conversion document to list the 2to3 command individually.
  • The ImportErrors in unit tests are now correctly handled by the relax test suite. If an ImportError occurred, this was previously killing the entire test suite.
  • The target_function.relax_fit module unit tests are now skipped if the C module is not compiled.
  • Expanded the Python 2 to 3 conversion document.
  • Small update to the 2to3_checklist document - the print statement conversion has been added.
  • The lib.errors module is now importing lib.compat.pickle for better Python 2 and 3 support. This shifts the compatibility code from lib.errors into lib.compat so that the 2to3 program will not touch the lib.errors module.
  • Better Python 3 compatibility in some test suite shared data profiling scripts. These changes invert the logic, importing the Python 3 builtins module and aliasing xrange() to range(), and passing if an ImportError occurs. The code will now no longer be modified by the 2to3 program. A sketch of this compatibility pattern is given at the end of this version's listing.
  • Unicode fixes for the "\u" string in "\usepackage" in the module docstring. This requires escaping as "\\usepackage" to avoid the unicode character '\u'.
  • The lib.check_types module now imports io.IOBase from the lib.compat module. This is to shift more Python 2 vs. 3 compatibility into lib.compat and out of all other modules.
  • Python 3 improvements - changed how the builtins.unicode() function, absent in Python 3, is handled. The aliased builtins.str() function is now referenced as lib.compat.unicode(). The Python 2 __builtin__.unicode() function is also aliased to lib.compat.unicode(). The GUI code using this function now imports it from lib.compat.
  • Removed the writable base directory check in the dauvergne_protocol auto-analysis. This check was causing the system test to fail if the user does not have write access to the installed relax directory.
  • Expanded the Mac_framework_build_3way document to include matplotlib.
  • Important bug fix for a race condition causing the GUI to freeze. This is really only seen in the GUI tests on MS Windows systems, as a user could never be fast enough with the mouse. The GUI interpreter flush() method for ensuring that all user functions in the queue have been cleared now calls wx.Yield() to force all wxPython events to also be flushed. This change will avoid random freezing of the relax test suite.
  • Bug fix for the Mf.test_bug_21615_incomplete_setup_failure GUI test on MS Windows systems. The GUI interpreter flush() method needs to be called between the two structure.load_spins user function calls. Without this, the test will freeze on MS Windows. The freezing behaviour is however not 100% reproducible and is dependent on the Windows version and wxPython version.
  • Shifted a number of wx.NewId() calls to the module namespace to conserve IDs. These are for the menus in the main window and in the spin view window.
  • Shifted the wx.NewId() calls for the spectrum list GUI element to the module namespace. These IDs are used for the pop up menus. The change avoids repetitive calls to wx.NewId() every time a right click occurs, conserving wx IDs so that they are not exhausted when running the test suite or running the GUI for a long time.
  • More shifting of wx.NewId() calls for popup menus to module namespaces to conserve IDs.
  • Converted all of the GUI wizard button IDs to -1, as they are currently unused. This should conserve wx IDs, especially in the test suite.
  • Shifted the main GUI window toolbar button wx IDs to the module namespace. This has no effect apart from better organising the code.
  • Shifted the relax controller window popup menu wx IDs to the module namespace. This is simply to better organise the code to match the other GUI module changes.
  • Menus created by the gui.components.menu.build_menu_item() now default to the wx ID of -1. This is to conserve wx IDs. If the calling code does not provide the ID, there is no need to grab one from the small pool of IDs.
  • Shifted the spin viewer GUI window toolbar button wx IDs to the module namespace. This should conserve wx IDs as the window is created and destroyed, as only 2 IDs will be taken from the small pool for the entire lifetime of the program.
  • Shifted all of the wx.NewId() calls for the new analysis wizard into the module namespace. This will hugely save the number of wx IDs used by the GUI, especially in the test suite. Instead of grabbing 8 IDs from the small pool every time the new analysis wizard is created, only 8 IDs for the lifetime of the program will be used.
  • Another large wx ID saving change. The ID associated with the special accelerator table that allows the ESC button to close relax wizards is now initialised once in the module namespace, and not each time a wizard is created.
  • A small wx ID conserving change - the 'Execute' button in the analysis tabs now uses the ID of -1. A unique ID is not necessary and is unused.
  • The user function class menus no longer have unique wx IDs, as these are unnecessary. This conserves the small pool of unique wx IDs, as the spin viewer window is created and destroyed.
  • Bug fix for the structure.load_spins user function new from_mols argument. This was incorrectly using the pipe_control.pipes.pipe_names() function to obtain its default values in the GUI (although this is not currently used). The result was a non-fatal error message on Mac OS X systems of "Python[1065:1d03] *** __NSAutoreleaseNoPool(): Object 0x3a3944c of class NSCFString autoreleased with no pool in place - just leaking".
  • Added a debugging Python version check to the devel_scripts/memory_leak_test_relax_fit.py script. This prevents the script from being executed with a normal Python binary.
  • Created the blacklisted Noe.test_noe_analysis_memory_leaks GUI test. This long test can be manually run to help chase down memory leaks. This can be monitored using the MS Windows task manager, once the 'USER Objects' column is shown. If the USER Objects count reaches 10,000 in Windows, then no more GUI elements can be created and the user will see errors.
  • Added a printout to the Noe.test_noe_analysis_memory_leaks GUI test to help with debugging.
  • Improved debugging printouts for the Noe.test_noe_analysis_memory_leaks GUI test.
  • Small fix for the GUI analysis deletion method to prevent racing in the GUI tests.
  • Redesigned how wizards are destroyed in the GUI. The relax wizard Destroy() method is now overridden. This allows the buttons in the wizard to be properly destroyed, as well as all wizard pages. This should remove a lot of GUI memory leaks.
  • Created the General.test_new_analysis_wizard_memory_leak blacklisted GUI test. This will be used to check for memory leaks in the new analysis wizard.
  • Removed an unused dictionary from the GUI wizard object.
  • Added a wx.Yield() before destroying the new analysis wizard via menu_new(). This is to avoid racing which can be triggered in the test suite.
  • Bugfixes:
  • Fix for the latex2html tags in the model-free chapter of the relax manual. This bug may affect the compilation of both the PDF and HTML version (http://www.nmr-relax.com/manual/) of the manual.
  • Formatting improvements for the user function chapter of the HTML manual. This will hopefully fix the horrible formatting whereby all text is wrapped in font size HTML tags.
  • Big bug fix for the text size formatting of the HTML manual. The previous fix for the user function chapter of the HTML manual (http://www.nmr-relax.com/manual/Alphabetical_listing_user_functions.html) did not fix the problem. The issue was with the {exampleenv} environment defined using a \newenvironment command in the preamble. The \footnotesize command was being used at the start, but nothing was changing the font size back at the end. In LaTeX, the ending of the environment appears to reset the font size, whereas in latex2html it does not. Therefore all text after this environment is prepended by a font size reduction tag in the HTML manual, and this keeps accumulating after each new exampleenv environment.
  • Fix for the poorly written User_functions.test_structure_add_atom GUI test. This fixes the first of the two parts of bug #22772, the modelfree4 binary issue and the User_functions GUI tests with wxPython 2.9 failures of the test suite. The problem was that a list element was being set in the GUI test, but that element did not exist yet. Somehow this worked in wxPython 2.8, but the bad code failed on wxPython 2.9.
  • Updated the Palmer.test_palmer_omp system test for the 64-bit Linux Modelfree 4.20 GCC binary file. This fixes the second and last part of bug #22772, the modelfree4 binary issue and the User_functions GUI tests with wxPython 2.9 failures of the test suite. The problem is that the 64-bit GNU/Linux GCC compiled binary of Modelfree 4.20 produces different results from previous versions. These are now caught by the system test and correctly checked.
  • Removed the use of OrderedDict(). OrderedDict is first available in Python 2.7, and is not essential functionality. The functionality is replaced by looping over a list of dictionary keys instead, which is picked up during the analysis. Bug #22798: Failure of relax to start due to an OrderedDict ImportError on Python 2.6 and earlier.
  • Fix for the find next bug in the relax controller window. This is bug #22815, the failure of find next using F3 (or Ctrl-G on Mac OS X) in the relax controller window if search text has already been set. The fix was simple, as the required flags are in the self.find_data class object (an instance of wx.FindReplaceData).
  • Fix for find dialog in the relax controller window. This is for bug #22816, the find functionality of the relax controller window does not find text when using wxPython >= 2.9. The find wxPython events are now bound to the find dialog rather than the relax controller window LogCtrl element for displaying the relax messages. This works on all wxPython versions.
  • Bug fix for the structure.align user function for when no data pipes are supplied.
  • Bug fix for the N-state model grid search when only alignment tensor parameters are optimised. The algorithm for splitting up the grid search to optimise each tensor separately, hence massively collapsing the dimensionality of the problem, was being performed incorrectly. The grid_search() API method inc, lower, and upper arguments are lists of lists, but were only being treated as lists.
  • Final fix for bug #22849, the failure of the N-state model analysis when optimising only alignment tensors using RDCs and/or PCSs. The alignment tensor is no longer initialised to zero values. This is to allow the skip_preset argument for the minimise.grid_search user function to be operational for the N-state model, a feature introduced with the zooming grid search. The solution was to check for the uninitialised tensor in the minimise_setup_fixed_tensors() method of the specific_analyses.n_state_model.optimisation module.
  • Bug fix for the lib.arg_check.is_float_matrix() function. The check for a numpy.ndarray data structure type was incorrect so that lists of numpy arrays were failing in this function. Rank-2 arrays were not affected.
  • Fix for the structure.com user function. This fixes bug #22860, the failure of the structure.com user function after calling structure.delete. The number of models in cdp.structure is now counted and if set to zero, RelaxNoPdbError will be raised.
  • The structure.write_pdb user function can now handle empty molecules. This fixes bug #22861, the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete. To handle this consistently, the internal structural object ModelContainer.mol_loop() generator method has been created. This loops over the molecules, yielding those that are not empty. The MolContainer.is_empty() method has been fixed by not checking for the molecule name, as that remains after the structure.delete user function call while all other information has been removed. And finally the write_pdb() structural object method has been modified to use the mol_loop() method rather than performing the loop itself.
  • Fix for the structure.delete user function for molecule metadata once no more data exists. This relates to bug #22861, the chain IDs in the structure.write_pdb user function PDB files are incorrect after calling structure.delete. The metadata, when it exists, is now deleted for the molecule once no more data is present.
  • Fix for the system test Relax_disp.test_bug_atul_srivastava. The call to the expected RelaxError needed to be performed differently for Python versions earlier than 2.7.
  • Fix for bug #22937, the failure of the Relax_disp.test_estimate_r2eff_err_auto system test on Python 2.5. The test_suite/shared_data/dispersion/Kjaergaard_et_al_2013/1_setup_r1rho_GUI.py simply required a newline character at the end of the file so that it can be executed in Python 2.5.
  • Fix for bug #22938, the failure of the test suite in the relax GUI. The problem was that the status.skip_blacklisted_tests variable did not exist - it was only initialised if relax is started in test suite mode. Now the value is always set from within the status module and defaults to True.
  • Python 3 fixes for the relax codebase. These changes were made using the command: 2to3 -j 4 -w -f buffer -f idioms -f set_literal -f ws_comma -x except -x import -x imports -x long -x numliterals -x xrange .
  • Python 3 fixes throughout relax, as identified by the 2to3 script. The command used was: 2to3 -j 4 -w -f except -f import -f imports -f long -f numliterals -f xrange .
  • Python 3 fixes - eliminated all usage of the dictionary iteritems() calls as this no longer exists.
  • Python 3 fixes using 2to3 for the extern.numdifftools package (mainly spacing fixes). The command used was: 2to3 -j 4 -w -f buffer -f idioms -f set_literal -f ws_comma -x except -x import -x imports -x long -x numliterals -x xrange .
  • Python 3 fixes using 2to3 for the extern.numdifftools package. The command used was: 2to3 -j 4 -w -f except -f import -f imports -f long -f numliterals -f xrange .
  • Python 3 fixes for all print statements in the extern.numdifftools package. The print statements have been manually converted into print() functions.
  • Python 3 fixes via 2to3 - elimination of all map and lambda usage in relax. The command used was: 2to3 -j 4 -w -f map .
  • Python 3 fixes via 2to3 - replacement of all `x` with repr(x). The command used was: 2to3 -j 4 -w -f repr .
  • Manual Python 3 fixes for the dict.key() function which returns a list or iterator in Python 2 or 3. This involves a number of changes. The biggest is the conversion of the "x in y.keys()" statements to "x in y". For code which requires a list of keys, the function calls "list(y.keys())" or preferably "sorted(y.keys())" are used throughout (sorted() ensures that the list will be of the same order on all operating systems and Python implementations). A number of "x in list(y.keys())" statements were simplified to "x in y", some list() calls changed to sorted(), and some unnecessary list() calls were removed.
  • Python 3 fixes via 2to3 - elimination of all apply() calls. This only affects the GUI which cannot run in Python 3 yet as wxPython is not Python 3 compatible yet. The command used was: 2to3 -j 4 -w -f apply .
  • Python 3 fixes via 2to3 - proper handling of the dict.items() and dict.values() functions. These are now all wrapped in list() function calls to ensure that the Python 3 iterators are converted to list objects before they are accessed. The command used was: 2to3 -j 4 -w -f dict .
  • Python 3 fixes via 2to3 - the execfile() function does not exist in Python 3. The command used was: 2to3 -j 4 -w -f execfile .
  • Python 3 fixes via 2to3 - the filter() function in Python 3 now returns an iterator. The command used was: 2to3 -j 4 -w -f filter .
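
The mapping of points from the point file to map points, mentioned in the matplotlib surface command file entry above, amounts to a nearest-neighbour search by Euclidean distance. A simplified numpy sketch of the idea (not the relax dispersion code) is:

    from numpy import argmin, array
    from numpy.linalg import norm

    def nearest_map_point(point, map_points):
        # Euclidean distances from the given point to every map grid point.
        distances = [norm(array(point) - array(p)) for p in map_points]
        # Index of the closest map point.
        return argmin(distances)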
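
The lib.xml change above, where float elements are stored as IEEE-754 byte arrays while None rows are preserved, can be illustrated by the following generic sketch. The function names here are hypothetical and this is not the object_to_xml()/xml_to_object() implementation, only the underlying idea:

    from binascii import hexlify, unhexlify
    from struct import pack, unpack

    def float_to_ieee754(value):
        # None elements are stored as None, while floats become 8-byte IEEE-754
        # values encoded as hexadecimal text for lossless storage.
        if value is None:
            return None
        return hexlify(pack('>d', value))

    def ieee754_to_float(text):
        # The reverse conversion, preserving None elements.
        if text is None:
            return None
        return unpack('>d', unhexlify(text))[0]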
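
The Python 2 and 3 compatibility entries above (the builtins import with xrange() aliasing, and the lib.compat.unicode() aliasing) follow a common try/except pattern. In rough outline, and not as the exact lib.compat or profiling script code, it looks like:

    # Python 3 has the builtins module and no xrange(), so alias range() to
    # xrange() and simply pass if the import fails on Python 2.
    try:
        import builtins
        xrange = range
    except ImportError:
        pass

    # Provide a unicode() function on both Python versions.
    try:
        from __builtin__ import unicode      # Python 2.
    except ImportError:
        from builtins import str as unicode  # Python 3: str() is already unicode.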

New in version 3.3.1 (October 10th, 2014)

  • Changes:
  • Epydoc docstring fix for the dep_check.version_comparison() function.
  • Removed ZZ and HD exchange from the dispersion chapter of the relax manual. These would probably require completely new analysis types added to relax to analyse such data.
  • Updated the 'Announcement' section of the release checklist document. This now includes details about initially composing the message using the relax wiki, and then how that text and the CHANGES file are used for the email announcement and the Gna! news item.
  • Small changes for the Gna! news item in the release checklist document.
  • Modified the announcement section of the release checklist document. Text about removing wiki markup has been added.
  • More expansion of the release checklist document. Added text about creating internal and external links for the wiki release notes.
  • Modified the Relax_disp.test_show_apod_extract system test which checks the output from showApod. The output can differ according to the NMRPipe version, though the 'Noise Std Dev' value remains the same.
  • Fix for the comments of the showApod dependency check.
  • Fix for raising an error when calling showApod while the subprocess module is not available.
  • Fix for the dependency check for showApod in system tests.
  • Further extended the protocol for repeated dispersion analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Extended the system test for the protocol for repeated dispersion analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added a relaxation dispersion model profiling log file for relax version 3.3.0 vs. 3.2.3. This is the output from the dispersion model profiling master script. These numbers will be used for the relax 3.3.0 release notes.
  • Fixes for the relax 3.3.0 vs. 3.2.3 dispersion model profiling log file. The numbers for the numeric models were incorrectly scaled and were a factor of 10 too high.
  • Fixes for the scaling factors in the dispersion model super profiling script.
  • Editing of the relax 3.3.0 features section of the CHANGES file. This will be used for the release notes.
  • Added more test data for the repeated analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Updated the Baldwin 2014 reference in the relax manual. The pybliographic software was used to format this BibTeX entry. This was updated as volume and page number information is now available.
  • Updated the Morin et al, 2014 paper (the relax relaxation dispersion paper) reference in the manual. The paper now has volume and page information.
  • Added some more user function renamings to the translation table. These were identified while preparing the release notes on the wiki (http://wiki.nmr-relax.com/Category:Release_Notes, http://wiki.nmr-relax.com/Release_notes).
  • Stored a frequency-dependent dictionary with spectrum IDs and repeated CPMG frequencies in the setup pipe. This information will propagate out through the child pipes. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Further extended methods in the class for repeated analysis of dispersion data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Updated the release checklist document, including adding a section about cross-linking. The cross-linking is important for search engine indexing.
  • Created a simple script for printing out the names of all user functions.
  • Added listings of all user functions from relax version 2.0.0 all the way to relax 3.3.0. This will be used to look at how the user function names have changed with time.
  • Added a script and log file for comparing relax user function differences between versions.
  • Created a document for relax users which follows the changes to the user function names.
  • For the spin.display user function, added the printout of the spin ID and selection status. This helps with showing the spin ID string used for selection and the current selection status. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added functionality to the back-end for displaying pipes to sort the pipe names before printing. Also added the return of the list of pipes together with the associated pipe type and pipe bundle information. This helps with getting a better overview of multiple pipes in the data store. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Passed the force flag from the front end of value.set through to the back end. Bug #22598: The back end of value.set does not respect the force=False flag.
  • Broke the optimisation function into smaller functions. This helps with selecting spins, performing a particular grid search, and minimising. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Modified system test to follow the new functions in the auto analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Shifted the user function listing script into the test suite directory where the results are.
  • Created a script for printing out relax 1.3 user functions.
  • Stripped out all of the relax intro and script printouts from the user function listing files. This allows the diff.py script to be simplified.
  • Updated the relax 1.3 user function printout script and added many printouts. The printouts are for relax versions 1.3.5 to 1.3.16. The earlier relax versions used the relax 1.2 user function setup.
  • Created a script for printing out all user functions for relax 1.2 versions. This also includes the relax 1.3.0 to relax 1.3.4 versions.
  • Added the relax 1.3.0 to relax 1.3.4 user function printouts.
  • Changed the behaviour of the script for showing user function difference between relax versions. The relax versions are now reversed so the oldest version is at the bottom of the difference printout.
  • Added the relax 1.0.1 to relax 1.2.15 user function printouts. The diff.log file has also been updated with all of these versions.
  • Updated the user_function_changes.txt document. This now lists all changes in the user function naming from relax version 1.0.1 all the way to relax 3.3.0.
  • Added all remaining user function renamings since relax 2.0.0 to the translation table. These were taken directly from the docs/user_function_changes.txt document.
  • Added all user function renamings since relax 1.3.1 to the translation table. These were taken directly from the docs/user_function_changes.txt document. Earlier relax versions are far too different, so this will be the earliest relax version for this translation table. The relax 1.2 and earlier (and 1.3.0) versions used the run argument throughout and the scripting was so different that telling the user how to upgrade to the new user functions would be pointless. In addition, relax 1.2.15, the last release of these old designs, dates back to November 2008.
  • Changed the order of the two relax versions being compared for user function changes. This is in the diff.py script and log file and the user_function_changes.txt document.
  • Changed the organisation of the files in the docs/ directory. A new docs/devel directory has been created and the 2to3_checklist, Mac_framework_build_3way, package_layout, and prompt_screenshot.txt documents shifted into it. This is to hide or abstract away the development documents so that relax users do not see them when looking into docs/. This should make the directory less intimidating.
  • Shifted the Release_Checklist document into docs/devel/ to hide it from users.
  • Correction for the noe.read to spectrum.read_intensities user function change. This is for the translation table used to catch old user function calls.
  • Initial try to implement plotting in the repeated auto analysis protocol. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Small improvement of the matplotlib plotting of data in the repeated analysis protocol. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Fix for calling the correct folder with the test intensities. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • For the repeated analysis class, implemented a method to collect peak intensities and a function to plot the correlation. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • The Relax_disp.test_repeat_cpmg system test is now skipped if the matplotlib module is missing. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added the Gimp XCF file for the logo of the relax wiki.
  • Added system test Relax_fit.test_curve_fitting_height_estimate_error() for the manual and automated analysis of exponential fit. This is to prepare for new methods in the auto analysis protocol.
  • In the auto analysis of exponential fitting, changed the minimisation method from simplex to Newton to speed up the fitting. This is for the master Monte Carlo simulations.
  • In the system test Relax_fit.test_curve_fitting_height_estimate_error(), moved the auto-detection of replicated spectra into the manual method. This is to prepare for the automated detection of replicates.
  • Implemented a method to automatically find spectrum duplicates in the exponential fit. This eases the user intervention for the error analysis, if this has been forgotten.
  • Implemented the writing out of a "grace2images.py" script file, when performing auto analysis of exponential fits.
  • Created the Structure.test_delete_model system test. This is in preparation for extending the structure.delete user function to be able to delete individual structural models. The test will only pass once this functionality is in place.
  • Expanded the wiki instructions in the release checklist document. This includes a number of steps for significantly improving the release notes: External links to the Gna! trackers with full descriptions, external links to the HTML user manual for all user functions, internal links to release notes of other relax versions, internal links to wiki pages for all models from all theories, and HTML formatting of all symbols/parameters/etc.
  • Introduction of the model argument to the structure.delete user function. This argument is passed all the way into the internal structural object, but is not used yet.
  • The model argument in the structure.delete user function is now operational. In the internal object, it has two functions. When the atom_id argument is None, the new ModelList.delete_model() function is called to remove the entire model from the list of structural models. When the atom_id argument is supplied, only the corresponding atoms in the given model will be deleted.
  • Expanded the checking in the Structure.test_delete_model system test. Now a number of structural model loading and deletion scenarios are tested.
  • Implemented a back-end function to estimate the Rx and I0 errors from the Jacobian matrix. This is in preparation for a user function in relax_fit for estimating errors.
  • Implemented the relax_fit.rx_err_estimate user function in relax_fit to estimate the Rx and I0 errors from the Jacobian covariance matrix.
  • Extended the Relax_fit.test_curve_fitting_height_estimate_error() system test to test the error estimation method from the covariance matrix. The results seem very similar when increasing to 2000 Monte Carlo simulations.
  • Renamed the pipe_control.monte_carlo module to pipe_control.error_analysis. This is in preparation for the module to handle all error analysis techniques: Monte Carlo simulations, covariance matrix, Jackknife simulations, Bootstrapping (which is currently via the Monte Carlo functions), etc. All current functions have been renamed with the 'monte_carlo_' prefix.
  • Fix for the old relax 1.2 model-free results file reading. This is due to the pipe_control.monte_carlo to pipe_control.error_analysis module renaming.
  • Implemented the pipe_control.error_analysis.covariance_matrix() function. This follows from http://thread.gmane.org/gmane.science.nmr.relax.scm/23526/focus=7096. It will be used by a new error_analysis.covariance_matrix user function. It calls the specific API methods model_loop(), covariance_matrix(), and set_error(), together with the relax library lib.statistics.multifit_covar() function, to do most of the work (a rough sketch of the underlying error formula is given after this changes list).
  • Modified the Relax_fit.test_curve_fitting_height_estimate_error system test. The call to relax_fit.rx_err_estimate has been replaced by the yet-to-be implemented error_analysis.covariance_matrix user function.
  • Creation of the error_analysis.covariance_matrix user function. This is simply a code rearrangement. The relax_fit user function module was duplicated and relax_fit.rx_err_estimate renamed to error_analysis.covariance_matrix. References to the specific analysis have been removed.
  • Created the specific analysis base API method covariance_matrix(). This defines the arguments required and what is returned by the method. It raises the RelaxImplementError for all analyses which do not implement this method.
  • Modified pipe_control.error_analysis.covariance_matrix(). The call to the API covariance_matrix() method now has the model_info argument passed into it. For the relaxation curve-fitting, this allows the loop over spin systems to be skipped.
  • Shifted the contents of the specific_analyses.relax_fit.estimate_rx_err module into the API. The estimate_rx_err() function is now the covariance_matrix() method of the specific API. The code for calculating the covariance matrix and errors is now in the pipe_control.error_analysis.covariance_matrix() function, so it has been removed. And the error setting is performed by the set_errors() API method, so that code has been deleted as well.
  • Removed the specific_analyses.relax_fit.estimate_rx_err module import. The module has been merged into the specific API module.
  • Fix for the pipe_control.error_analysis.covariance_matrix() function. The set_error() API method is parameter specific, so a loop over the parameters using the get_param_names() API method has been added.
  • Removed the estimate_rx_err module from the specific_analyses.relax_fit.__all__ list. This module was deleted after merger into the api module.
  • Improved the correlation plot for intensities. The intensity-to-error ratio is now plotted, which is the correct measure for this data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Implemented a correlation plot of the R2eff values for different pipes. This has R2eff/σ(R2eff) plotted, which is the best way to represent this data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Further improved the plotting of data in repeated analysis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added the Relax_disp.test_show_apod_rmsd_dir_to_files system test to the blacklist. This is if the showApod program is not installed on the machine and allows the test suite to pass.
  • Extended the printout for the skipped tests in the test suite. As tests using the NMRPipe showApod software are skipped and listed in this table, the text now includes 'software' in the list.
  • Shifted the checks for the Dasha and Modelfree4 software into the system test __init__() method. This is to bring this into the same design as the relaxation dispersion tests which require the NMRPipe showApod software. Now the test suite will list either Dasha or Modelfree4 in the skipped test table if they are not installed.
  • Adding another statistic method to plot for multi-data sets. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • More adding of matplotlib snippets for plotting intermediate data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Changing the range of plotting for statistics. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • More changes to plotting for statistics. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Fix for axis limits when plotting stats. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Fix for globbing, to prevent accidentally picking up the wrong intensity file. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Correction to figure limits. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Implemented writing out of statistics to file. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Adding writing out of PNG files from matplotlib, when looking at statistics. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Another math domain check: if the reference intensity is set to 0.0, then the points are skipped rather than raising an error. This can happen for extremely bad dispersion data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Trying to implement flexibility for when expected data is missing. This can be due to failures when processing the data, where a whole run of data is randomly skipped. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Better check for math domain errors in the intensity proportionality. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Removed the initialisation of a dictionary before the existence of the data has been checked. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Small fix for the correct checking of missing data. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Imported the Numdifftools 0.6.0 package into the relax source tree. This package is extremely useful for testing the implementation of gradients, Hessians, and Jacobians for all relax target functions. The numerical values from numdifftools can be compared to the directly calculated values. And for analysis types where the partial derivatives with respect to each model parameter are too complicated to calculate, or the derivatives are very complicated and hence slow, numdifftools can be used to provide a numerical estimate for direct use in the optimisation. The Numdifftools package is from https://pypi.python.org/pypi/Numdifftools and https://code.google.com/p/numdifftools/. The current version 0.6.0 has been placed into extern/numdifftools. This is only the numdifftools package from within the official distribution files; the Python package setup.py file and associated files and directories have not been included. The package uses the New BSD licence (the revised licence with no advertising clause) which is compatible with the GPL v3 licence.
  • Reordered functions in repeated analysis protocol. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added more checks of the methods to the Relax_disp.test_repeat_cpmg() system test. This actually shows that the relax_disp.r20_from_min_r2eff user function may be broken. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Fix for the test of whether the method has finished when called. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Turned on minimisation in system test Relax_disp.test_repeat_cpmg(). Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • The lib.spectrum.nmrpipe module has been made independent of the relax source code. This was discussed at http://thread.gmane.org/gmane.science.nmr.relax.scm/23357/focus=7103. The change allows the software verification tests to pass. The dep_check module cannot be used in the relax lib package - only modules from within lib are allowed to be imported into modules of lib. The fix now allows the full test suite to pass and hence new relax releases are once again possible.
  • Created a document which explains how missing copyrights can be found.
  • Even more improvements to the shell command for finding missing copyrights.
  • Updated the copyright notice for 2014 for all files changed by Edward d'Auvergne. These were identified using the command in the find_missing_copyrights document.
  • Added numdifftools to the extern package __all__ list.
  • Updated the find_missing_copyrights document. The matching is now more precise and skips all svnmerge operations.
  • Added the 2014 copyright notice for Troels Linnet to many relax source files. These were identified as being edited by Troels using the command listed in the find_missing_copyrights document. The changes include adding "Copyright 2014 Troels E. Linnet" to many files not containing Troels' copyright notice, and extending the 2013 copyright to 2014.
  • Implemented correlation plot of minimisation values. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Changed the missing package/module/software table in the test suite. This is to allow all names to fit and to update the column titles for software packages.
  • Decreased the accuracy of a check in the Relax_disp.test_estimate_r2eff_err_auto system test. This is to allow the test to pass on my Windows 7 VM.
  • Added Troels E. Linnet to the COMMITTERS file, which has not been updated in almost 3 years.
  • Created the Structure.test_get_model system test. This demonstrates that the internal structural object get_model() method is not working as it should.
  • Added a few more checks to the Structure.test_get_model system test.
  • Created the Structure.test_collapse_ensemble system test. This is used to test a planned feature of the internal structural object. The collapse_ensemble() method will be created to remove all but one model in the structural ensemble.
  • Modified the Structure.test_collapse_ensemble system test to check the initial values. This is for sanity reasons as the test coverage of the structure.add_atom user function is poor.
  • Implemented the internal structural object collapse_ensemble() method. This allows the Structure.test_collapse_ensemble system test to pass.
  • Created a basic text based progress meter in the new lib.text.progress module. This is taken from the script test_suite/shared_data/frame_order/cam/generate_base.py.
  • Modifications to the User_functions.test_structure_add_atom GUI test. As lists of lists are now accepted by the structure.add_atom user function, the operation in the GUI is now significantly different. Therefore many checks have been removed from the GUI test.
  • Updated the minimum minfx dependency version number from 1.0.9 to 1.0.11 in the dep_check module. This newest version handles infinite target function values preventing optimisation from continuing forever. The 1.0.10 version is also useful as there is full support for gradients and Hessians in the log-barrier constraint algorithm.
  • Shifted the specific_analyses.relax_disp.variables module into lib.dispersion. This is both to minimise circular dependencies, as previously the specific_analyses.relax_disp modules import from target_functions.relax_disp and vice-versa, and to allow the relax library functions to have access to these variables. This follows from a similar change to the frame order analysis in the frame_order_cleanup branch.
  • Dependency fix for the auto_analyses.relax_disp_repeat_cpmg module. This was causing relax to fail. SciPy is an optional dependency for relax, but this module caused relax to not start if scipy was not installed. This was detected by testing relax with PyPy.
  • Implemented writing out of particular correlation plots to file. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Created a special internal structural object selection object. This will be used for massively speeding up the internal structural object. The use of the lib.selection module by the internal structural object is currently very slow as a huge number of calls to re.search() are required. The idea is to avoid this by using lib.selection once to populate this new selection object, and then reusing this object to loop over molecules and atoms.
  • Added the selection() method to the internal structural object. This parses the atom ID string using the lib.selection module, loops over the molecules and atoms, performs matches using re.search() via lib.selection, and populates and returns the new Internal_selection object. This can be used to pre-process the atom ID string to save huge amounts of time.
  • The internal structural object validate_models() method now accepts the verbosity argument. This is used to silence printouts.
  • Fixes for the new structural object Internal_selection object. The atom indices are not stored via the molecule index.
  • Converted the rotate() and translate() structural object methods to use the new selection object. The atom_id arguments have been replaced with selection arguments. Therefore all parts of relax which call these methods must first call selection() to obtain the Internal_selection instance.
  • Created the structural object Internal_selection.mol_loop() method. This is to simply quickly loop over all molecule indices of the selection object.
  • Converted all structural object methods to use the selection object rather than atom ID strings. This should have a significant impact on the speed of certain operations within relax. The most obvious effect will be a huge speed up of the interatom.define user function. There should be speed ups with a number of other user functions relating to structural information. All parts of relax have been updated for the change.
  • Implemented the sampling sparseness instead of NI on the graph axis. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Massive speed up of the internal structural object add_model() method. This speeds up the structure.add_model user function, as well as many internal relax operations on the structural object. Instead of using the copy.deepcopy() function to duplicate an already existing structural model, now new molecule container objects are created and then the individual elements of the original molecule container data lists are copied one by one. This avoids copying a lot of internal Python junk and hence the copying operation is now orders of magnitude faster.
  • Created the new --no-skip relax command line option. This is a debugging option specifically designed for relax developers. It allows all blacklisted tests to be executed, i.e. all blacklists are ignored. These tests would normally be skipped, however this option enables them.
  • Fix for the test suite summary printout function for the new --no-skip option. The relax status object was clashing with a variable of the same name.
  • Reactivated the Relax_disp.test_m61b_data_to_m61b system test, but blacklisted it. This will allow the test to be executed if the --no-skip command line option is used.
  • Created the Bmrb.test_bug_22703_display_empty system and GUI test. This system test catches bug #22703, the failure of the bmrb.display user function with an AttributeError when no data is present. It is simultaneously a system and GUI test, as the GUI test class inherits directly from the system test class.
  • Created the pipe_control.spectrometer.check_setup() function. This follows the design on the wiki page http://wiki.nmr-relax.com/Relax_source_design. This is for checking if spectrometer information has been set up.
  • Created the RelaxNoFrqWarning warning class for warning that no spectrometer information is present.
  • Renamed the pipe_control.spectrometer.check_setup() function to check_spectrometer_setup(). This is so it can be used without confusion outside of the module.
  • Fix for a broken elif block in the new pipe_control.spectrometer.check_spectrometer_setup() function.
  • The model-free bmrb_write() API method now checks for spectrometer information. This is via a call to the pipe_control.spectrometer.check_spectrometer_setup() function.
  • Modified the Bmrb.test_bug_22703_display_empty system/GUI test to catch the RelaxNoFrqError.
  • Created a special Check class based on the strategy design pattern. This is in the new lib.checks module. The class will be used to simplify and unify all of the check_*() functions in the pipe_control and specific_analyses packages.
  • Converted the pipe_control.spectrometer.check_*() functions to the strategy design pattern. These are now passed into the lib.checks.Check object, and the original functions are now instances of this class.
  • Alphabetical ordering of all functions in the pipe_control.pipes module.
  • Changed the design of the Check object in lib.checks. The design of the checking function to call has been modified - it should now return either None if the check passes or an instantiated RelaxError object if not. This is then used to determine if the __call__() method should return True (when None is received). Otherwise, if escalate is set to 1, the text from the RelaxError object is sent into a RelaxWarning and False is returned, and if escalate is set to 2, the error object is simply raised (a minimal sketch of this design is given after this changes list).
  • Updated the pipe_control.spectrometer.check_*_func() functions to use the new design.
  • Implemented the writing out of parameter values between comparison of NI level. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Fixes for the lib.checks.Check object. The __call__() method keyword arguments **kargs need to be processed inside the method to strip out the escalate argument.
  • The default value of the escalate argument of the Check.__call__() method is now 2. This will cause the calls to the check_*() function/objects to default to raising RelaxErrors.
  • Changed the behaviour of the lib.checks.Check object again. This time the registered function is stored rather than converted into a class instance method. That way the check_*() function-like objects do not need to accept the unused 'self' argument.
  • The data pipe testing function has been converted to the strategy design pattern of the Check object. The function pipe_control.pipes.test() has also been renamed to check_pipe().
  • Created the Bmrb.test_bug_22704_corrupted_state_file system test. This is to catch bug #22704, the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions.
  • Implemented getting the statistics for parameters and comparing to init NI. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Implemented writing and plotting of statistics for individual and clustered fitting, comparing to full NI. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added checks to the Bmrb.test_bug_22704_corrupted_state_file system test. This is to see if the cdp.exp_info data structure has been correctly restored from the save file.
  • Uncommented some checks in the Bmrb.test_bug_22704_corrupted_state_file system test.
  • For relaxation dispersion, modified the grid search and linear constraints so that the parameter kAB lies between 0 and 100. The parameter is only used in the TSMFK01 model. The kAB parameter is only for very slow forward exchange rates. According to the reference paper - Tollinger, M., Skrynnikov, N. R., Mulder, F. A. A., Forman-Kay, J. D., and Kay, L. E. (2001). Slow dynamics in folded and unfolded states of an SH3 domain. J. Am. Chem. Soc., 123(46), 11341-11352 (10.1021/ja011300z) - the expected values of kAB lie in the region of 0.1 to 5.0. If the exchange rate is any higher than this, then another model should be used for the analysis.
  • Set the default insignificance value to 0.0 instead of 1.0. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Modified the grid search limits for the parameter kAB to be between 0.1 and 20.0 rad.s-1. This is for the TSMFK01 model, where values much above 10 to 20 are not expected.
  • Implemented the counting of outliers for the statistics. This is to get a better feeling for why some statistics are so different between NI levels. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Created the Structure.test_mean system test. This is to test the functionality of a planned new feature, the structure.mean user function. This is an analysis aid that will calculate the mean structure from all loaded models.
  • Implemented the structure.mean user function frontend. The backend is currently just a stub function.
  • Fixes and simplifications for the pipe_control.pipes.check_pipe() checking object. One of the RelaxError classes was not initialised and the docstring was incorrect.
  • Created the pipe_control.structure.main.check_structure() checking object. This will be used for providing much more detailed feedback for when structural information is missing.
  • Converted all of the pipe_control.structure.main functions to use the check_structure() object. This standardises and improves all of the checks.
  • Some fixes and additional checks for the Structure.test_mean system test.
  • Implemented the backend of the structure.mean user function. This primarily occurs within the internal structural object in the new mean() method. The pipe_control.structure.main.mean() function simply checks if the current data pipe is correctly set up and then calls the structural object mean() method.
  • Created the Structure.test_align system test. This will be used to test the yet to be implemented structure.align user function. This user function will be similar to the structure.superimpose user function but will be designed so that structures with different primary and atomic sequences can be superimposed.
  • Created the frontend of the structure.align user function. This is almost the same as that of the structure.superimpose user function except that the pipes argument has been added and the titles and description changed to indicate the differences.
  • Registered the new user function argument type 'int_list_of_lists' in the prompt UI. This is to allow for lists of lists of integers, as used for the model argument in the new structure.align user function.
  • Modified the lib.arg_check.is_int_list() function to accept the list_of_lists Boolean argument. This updates the function to have the same functionality as is_str_list(), allowing lists of lists of integers to be checked.
  • Extended the Structure.test_align system test to thoroughly check the structural data. This includes changing the structure.align user function call to use 'fit to first' and carefully checking the new atomic coordinates.
  • Modified the Structure.test_align system test so that translations and rotations match the algorithm. This allows the output of the structure.align user function to be checked to see if the rotation matrix and translation vector found match that used to shift the original structures.
  • Implemented the structure.align user function backend. This is similar to the structure.superimpose user function, however the coordinate data structure only contains atoms which are in common to all structures.
  • The pipe_control.structure.main functions translate() and rotate() now accept the pipe_name argument. This is used to translate and rotate structures in different data pipes, as required by the structure.align user function.
  • The pipe_control.structure.main.check_structure() checking object now accepts the pipe_name argument. This allows structural data to be checked for in different data pipes without having to switch to them.
  • Modified the Structure.test_align system test to call the structure.write_pdb user function. This sets the file name to sys.stdout so that the original structure and the final aligned structures are output to STDOUT for debugging purposes.
  • Created the Structure.test_delete_atom system test. This is used to test the deletion of a single atom using the structure.delete user function.
  • Expanded the Structure.test_delete_atom system test. This is to show that the structure.write_pdb user function fails after a call to the structure.delete user function to delete individual atoms.
  • Fix for the new structure.align user function. The translation and rotation of the structures at the end to the aligned positions was being incorrectly performed.
  • Loosened some checks in the Structure.test_align system test to allow it to pass. Some self.assertEqual() checks for the atomic coordinates have been replaced by self.assertAlmostEqual() to allow for small machine precision differences.
  • Modified the lib.arg_check.is_str_or_inst() function to handle cStringIO objects. This allows sys.stdout to be used as a file object in the relax test suite.
  • Modified the lib.arg_check.is_str_or_inst() function to work with Python 3. Instead of checking for cStringIO.OutputType, which does not exist in Python 3, the argument is simply checked to see if it has a write() method.
  • Printout of the number of all R2eff points, if they differ between analyses. This can become an issue if a single intensity point has slipped into the noise, due to low quality of the spectrum reconstruction. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Implemented statistics for the R2eff values. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added data checks and printouts to the structure.align user function. The data checks are to prevent the user from attempting an alignment with differently named molecules, as this will not work.
  • Implemented writing out intensity and error correlations plot. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Implemented writing out of intensity statistics. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Expanded the structure.com user function to accept the atom_id argument. This allows the centre of mass (CoM) calculation to be restricted to a certain subset of atoms. The backend already had support for this feature, but now it is exposed in the frontend. The user function docstring has been slightly modified as well.
  • Skipping of the intensity calculation if the intensity pipe does not exist. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Added example CPMG data, which could possibly be sent for BMRB submission. The data is unpublished CPMG data, related to the paper: Webb H, Tynan-Connolly BM, Lee GM, Farrell D, O'Meara F, Soendergaard CR, Teilum K, Hewage C, McIntosh LP, Nielsen JE (2011). Remeasuring HEWL pK(a) values by NMR spectroscopy: methods, analysis, accuracy, and implications for theoretical pK(a) calculations. Proteins: Struct., Funct., Bioinf. 79(3), 685-702, DOI 10.1002/prot.22886. Task #7858: Make it possible to submit CPMG experiments for BMRB.
  • Added system test Relax_disp.test_bmrb_sub_cpmg() to try calling the bmrb functions in relax. Task #7858: Make it possible to submit CPMG experiments for BMRB.
  • Implemented the initial part of the API, to collect data for BMRB submission. Task #7858: Make it possible to submit CPMG experiments for BMRB.
  • Inserted a "RelaxImplementError" when trying to call bmrb_write from a relaxation dispersion analysis. To implement the function, it would require a re-write of the relax_data bmrb_write(star) function, and proper handling of cdp.ri_ids. It was also not readily possible to find examples of submitted CPMG data in the BMRB database. This makes it hard to develop, and even ensure that BMRB would accept the format. Task #7858: Make it possible to submit CPMG experiments for BMRB.
  • Removed the system test Relax_disp.test_bmrb_sub_cpmg() to be tested in the test-suite. This test will not be implemented, as it requires a large re-write of data structures. Task #7858: Make it possible to submit CPMG experiments for BMRB.
  • Removed the showing of Matplotlib figures in the test suite. Task #7826: Write a Python class for the repeated analysis of dispersion data.
  • Implemented system test Relax_disp.test_dx_map_clustered to catch the missing creation of a point file. Bug #22753: dx.map does not work when only 1 point is used.
  • Inserted a check into the Relax_disp.test_dx_map_clustered system test that a call to minimise.calculate gives the same value as stored in the file with the clustered χ2 value. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Made initial preparation to loop over clustered spins and IDs for the minimise.calculate user function call. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Implemented looping over spin-clusters when issuing a minimise.calculate. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Made back_calc_r2eff() in optimisation module use the spin and ID list instead. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Fix for the graph plotting functionality to send the spins as a list of one spin. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Fix for calling back_calc_r2eff() with the new argument keywords, using the lists of spins and spin IDs. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Fix for the synthetic data script calling back_calc_r2eff() with the old arguments; it now uses the lists of spin containers and spin IDs. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Inserted a last test in test_dx_map_clustered to check that the written χ2 values are as expected. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Moved the looping over cluster spin IDs into its own function in the API. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Added the selection string for all the cluster IDs to be passed back as well. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Made the value set function set the values for all spins if the parameter is a global parameter. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Moved the skipping of protons away from the looping function. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Inserted some testing lines for making a dx_map, either globally clustered or as a free spin. There is a big difference in which dx map you get. It illustrates beautifully the effect of clustering things together. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Added a BMRB NMR-STAR formatted deposition file for the OMP model-free data for reference. This is because there are no other NMR-STAR formatted files in the relax sources.
  • In the dispersion API calculate(), used the API function model_loop() to loop over the clusters instead. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Removed the loop_cluster_ids() function from the dispersion API. This should be implemented elsewhere. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Updated the API set_param_values() function to use model_loop() to get the spin_ids from the cluster. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Initial try to fix unit test test_value_set_r1_rit(). The problem is that no spin ID can be generated since the spins are created manually. "AttributeError: 'MoleculeContainer' object has no attribute '_res_name_count' ". Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Removed the checking of MODEL_LIST_MMQ, and spin.isotope from optimisation.back_calc_r2eff(), since this check is already covered. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Fix for references to "spin" in optimisation.back_calc_r2eff(). Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Fix for looping performed twice in relax_disp API model_loop(). Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Removed an unused proton reference in the relax_disp API calculate() method. There are though some problems with these tests (F 1.93 s for Relax_disp.test_korzhnev_2005_15n_dq_data, F 2.01 s for Relax_disp.test_korzhnev_2005_1h_mq_data, F 1.93 s for Relax_disp.test_korzhnev_2005_1h_sq_data). It is unclear where these come from. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Fix for epydoc in system test Relax_disp.test_dx_map_clustered.
  • Updated all of the Relax_disp.test_korzhnev_2005_*_data system tests. These now have slightly changed parameter values due to the fix of bug #22563, the NS MMQ 2-site dispersion model running at 32-bit precision and not 64-bit as it should be.
  • Epydoc change for DOI reference in system tests. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Added some test PyMOL scripts to create OpenDX maps and χ2 surface plots. These will go to the wiki: http://wiki.nmr-relax.com/Chi2_surface_plot.
  • Big improvement for running the relax unit tests via the relax command line options. The unit test module path is now accepted as a command line option. This brings more capabilities of Gary Thompson's test_suite/unit_tests/unit_test_runner.py script into the relax command line. The _pipe_control/test_value unit test module path can be specified as, for example, one of 'test_suite.unit_tests._pipe_control.test_value', 'test_suite/unit_tests/_pipe_control/test_value', '_pipe_control.test_value', '_pipe_control/test_value'. This allows individual modules of tests to be run, rather than having to execute all unit tests, which is very useful for debugging.
  • Modified the printouts for the unit tests when running with the --time command line option. The test name is now being processed. The leading 'test_suite.unit_tests.' text is now stripped out. And the remaining text is split into the module name and the test name. This is to allow the unit test module name to be more easily identifiable, so it can then be used as a command line option to allow only a subset of tests to be performed.
  • Modified the help strings for the test suite options shown when 'relax -h' is run. The ability to specify individual tests (or modules of tests for the unit tests) is now documented. The '--time' option help string has also been edited.
  • Fix for the Bmrb.test_bug_22704_corrupted_state_file GUI test. This was failing because the setUp() method in the inherited Bmrb system test module was being overwritten by the default Unittest.setUp() method. Therefore the system test setUp() method has been copied into the GUI test class.
  • Fix for the Test_value.test_value_set_r1_rit test of the _pipe_control.test_value unit test module. This is a general fix for all unit test modules which use the test_suite.unit_tests.value_testing_base.Value_testing_base base class. After the molecules, residues and spins are manually created, the pipe_control.mol_res_spin.metadata_update() function is called to make sure that all of the private and volatile metadata have been correctly created, so that the other pipe_control.mol_res_spin module functions can operate correctly.
  • Removal of repetitive code in the relaxation dispersion model_loop() API method. The spin loop does not need to be called twice, instead the if statements have been modified to better direct the code execution.
  • Added a script to simulate dispersion profiles at different settings. This shows that something is wrong: the back-calculated values in the graphs are not equal to the interpolated values. The listed χ2 values, judging from the dispersion graphs, simply cannot be true.
  • Changed the bounds for the sample scripts creating the 3D iso-surface plot, the surface plot, and the simulation of dispersion curves.
  • Minor changes to the Python matplotlib script for producing the surface plot. Also added the new data for the plotting.
  • Modified the example data after the issue with the parameters was fixed.
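
  As a rough illustration of the covariance matrix error estimation described above (see the note in the pipe_control.error_analysis.covariance_matrix() entry), the sketch below shows the standard least-squares formula that this type of estimate is based on. This is an assumption-based reconstruction using numpy, not the relax lib.statistics.multifit_covar() code, and the function and argument names are invented for the example.

      import numpy as np

      def covariance_errors(jacobian, errors):
          """Parameter errors from the Jacobian via the covariance matrix.

          This assumes the usual least-squares result cov = (J^T.W.J)^-1, with
          W a diagonal weighting matrix built from the measurement errors.
          """
          weights = np.diag(1.0 / np.asarray(errors)**2)
          covariance = np.linalg.inv(np.dot(jacobian.T, np.dot(weights, jacobian)))
          # The parameter errors are the square roots of the diagonal elements.
          return np.sqrt(np.diag(covariance))

  The lib.checks.Check object described above (see the note in the corresponding entry) can be sketched as follows. This is an illustrative reconstruction from the changelog text rather than the actual lib.checks module, with a plain Exception subclass standing in for relax's RelaxError classes and with invented example names.

      import warnings

      class RelaxError(Exception):
          """Stand-in for relax's RelaxError classes, for illustration only."""

      class Check:
          """Strategy design pattern checking object, as described above."""

          def __init__(self, function):
              # The checking function is stored rather than bound as a method,
              # so it does not need to accept an unused 'self' argument.
              self.function = function

          def __call__(self, *args, **kargs):
              # Strip out the escalate argument before calling the check.
              escalate = kargs.pop('escalate', 2)

              # The check returns None on success or an error object on failure.
              error = self.function(*args, **kargs)
              if error is None:
                  return True

              # Escalation level 1:  send the error text into a warning.
              if escalate == 1:
                  warnings.warn(str(error))
                  return False

              # Escalation level 2 (the default):  raise the error object.
              raise error

      # Hypothetical usage - a checking function returning None or an error.
      def check_positive_func(value):
          if value > 0:
              return None
          return RelaxError("The value %s is not positive." % value)

      check_positive = Check(check_positive_func)
      check_positive(1.0)                  # Returns True.
      check_positive(-1.0, escalate=1)     # Warns and returns False.
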
  • Bugfixes:
  • Fix for the two-point calculation of exponential curves with corrupted data. The two-point calculation is now also skipped if the measured intensity is 0. This can happen for corrupted intensity files.
  • Fix for the internal structural object get_model() method - it now actually returns the model.
  • Fixes for the structure.add_atom user function to allow lists of lists for the atomic position. This allows different coordinates to be supplied for each model.
  • Added safety checks for NaN values to the lib.structure.pdb_write module. This is within the _record_validate() function. The check prevents the creation of invalid PDB files.
  • Fix for the experimental information data pipe object when converting to XML state and results files. This is a partial fix for bug #22704, the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. The names and descriptions for the software, citation and script list objects were incorrectly set. These have been fixed so that the name of the data structure and the real description are present in the XML state or results file.
  • Fix for the cdp.exp_info.software data structure setup. This is a partial fix for bug #22704, the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. The Element data container name was being replaced by the software name, making it impossible to restore from the XML.
  • Implemented the cdp.exp_info.from_xml() method to correctly restore the experimental info structure. This fixes bug #22704, the corrupted relax state files after setting the relax references via the bmrb.software, bmrb.display, or bmrb.write user functions. This custom ExpInfo.from_xml() method is required to properly recreate the software, script and citation list data structures of the cdp.exp_info data structure, as these are special RelaxListType objects populated by Element objects (both from data_store.data_classes).
  • Bug fix for the structure.delete user function. When individual atoms are deleted, the bonded atom data structure is now correctly updated to remove the now non-existent atom.
  • Another bug fix for the structure.delete user function when deleting individual atoms. The bonded atom data structure consisting of indices requires all indices after the deleted atom to be decremented by 1.
  • Bug fix for the CONECT records created by the structure.write_pdb user function. The atom numbers inside the structural object were being used for the CONECT records rather than the atom numbers used within the PDB file.
  • Fix for writing out point files, when only one point is used. The code was testing for > 1 points to be present, before writing out point files. Bug #22753: dx.map does not work when only 1 point is used.
  • Fix for bug #22563, the NS MMQ 2-site dispersion model running at 32-bit precision and not 64-bit as it should be. The numpy.complex64 32-bit types have been replaced by numpy.complex128 in the lib.dispersion.ns_mmq_2site module.
  • Critical fix for kAB not belonging to the list of global parameters. kAB was only changed for the spin of interest, but not for the rest of the cluster. When the parameter vector is assembled with "assemble_param_vector(spins=spins)", it takes the global parameter from spin 0. Bug #22754: The minimise.calculate user function does not calculate χ2 value for clustered residues.
  • Improvements for PDB creation in the relax library for out of bounds structural coordinates. The lib.structure.pdb_write module atom() and hetatm() functions will now more gracefully handle atomic coordinates which are outside of the PDB limits of [-999.999, 9999.999]. When such coordinates are encountered, instead of producing a too long PDB line which does not pass the validation step, the functions will set the coordinates to the boundary value. This will at least allow a valid PDB file to be created, despite the warping of the coordinates (a small sketch of this clamping behaviour is given after this bugfix list).
  • Expanded the list of global dispersion parameters in the set_param_values() API method. This is a quick expansion of Troels' fix for the kAB parameter to allow for the release of relax 3.3.1. This is a small part of the discussion at http://thread.gmane.org/gmane.science.nmr.relax.scm/23948/focus=7188.
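
  A minimal sketch of the coordinate clamping described in the PDB creation improvement above (see the note in that entry). This is illustrative only and not the actual lib.structure.pdb_write code; the function name is invented for the example.

      def clamp_pdb_coordinate(value, lower=-999.999, upper=9999.999):
          """Clamp a coordinate to the fixed-width PDB column limits."""
          # Out-of-bounds values are set to the boundary value so that the PDB
          # line still validates, at the cost of warping the coordinate.
          if value < lower:
              return lower
          if value > upper:
              return upper
          return value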

New in version 3.3.0 (September 23rd, 2014)

  • Features:
  • Huge speed ups for all of the relaxation dispersion models, ranging from 1.452 to 163.004 times faster. The speed ups for the clustered spin analysis are far greater than for the single spin analysis.
  • Implementation of a zooming grid search algorithm for optimisation in all analyses. This includes the addition of the minimise.grid_zoom user function to set the zoom level. The grid width will be divided by 2**zoom_level and centred at the current parameter values. If the new grid is outside of the bounds of the original grid, the entire grid will be translated so that it lies entirely within the original (a small sketch of this calculation is given after this features list).
  • Increased the amount of user feedback for the minimise.grid_search user function. A comment for each parameter is now included in the printed grid search setup table. This includes whether the lower or upper bounds, or both, have been supplied and whether a preset value has been used instead.
  • Expanded support for R1rho 2D graph plotting in the relax_disp.plot_disp_curves user function, as the X-axis can now be the nu1 value, the effective field omega_eff, or the rotating frame tilt angle. The plots are also interpolated over the spin-lock offset.
  • Ability to optimise the R1 relaxation rate parameter in the off-resonance relaxation dispersion models.
  • Creation of the relax_disp.r1_fit user function for activating and deactivating R1 fitting in the dispersion analysis.
  • Better tab completion support in the prompt UI for Mac OS X users. For some Python versions, the Mac supplied libedit library is used rather than GNU readline. But this library uses a completely different language and hence tab completion was non-functional on these systems. The library difference is now detected and the correct language sent into libedit to activate tab completion.
  • Created the time user function. This is just a shortcut for printing out the output of the time.asctime() function.
  • The value.copy user function now accepts the force flag to allow destination values to be overwritten.
  • Expanded model nesting capabilities in the relaxation dispersion auto-analysis to speed up the protocol.
  • The spin-lock offset is now included in the spectra list GUI element for the relaxation dispersion analysis.
  • Creation of the relax_disp.r2eff_estimate user function for the fast estimation of R2eff/R1rho values and errors when full exponential curves have been collected. This experimental feature uses linearisation to estimate the R2eff and I0 parameters and the covariance matrix to estimate parameter errors.
  • Gradients and Hessians are implemented for the exponential curve-fitting, hence all optimisation algorithms and constraint algorithms are now available for this analysis type. Using Newton optimisation instead of Nelder-Mead Simplex can save over an order of magnitude in computation time. This is also available in the relaxation dispersion analysis.
  • The minimisation statistics are now being reset for all analysis types. The minimise.calculate, minimise.grid_search, and minimise.execute user functions now all reset the minimisation statistics for either the model or the Monte Carlo simulations prior to performing any optimisation. This is required for both parallelised grid searches and repetitive optimisation schemes to allow the result to overwrite an old result in all situations, as sometimes the original chi-squared value is lower and the new result hence is rejected.
  • Large expansion of the periodic table information in the relax library to include all elements, the IUPAC 2011 standard atomic weights for all elements, mass numbers and atomic masses for all stable isotopes, and gyromagnetic ratios.
  • Significant improvements to the structure centre of mass calculations by using the new periodic table information - all elements are now supported and exact masses are now used.
  • Added a button to the spectra list GUI element for the spectrum.error_analysis user function. This is placed after the 'Add' and 'Delete' buttons and is used in the NOE, R1 and R2 curve-fitting and relaxation dispersion analyses.
  • RelaxErrors are now raised in the prompt or script UI if an old user function is called, printing out the names of the old and new user functions. This is for help in upgrading old scripts and is currently for the calc(), grid_search(), and minimise() user function calls.
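
  To illustrate the zooming grid search feature above, here is a minimal sketch of the width-halving and translation logic only, assuming simple box bounds. It is not the relax or minfx implementation, and the function name and arguments are illustrative.

      # Minimal sketch of the zooming grid logic described above - illustrative only,
      # not the relax or minfx code.  The grid width is divided by 2**zoom_level,
      # centred at the current parameter values, and translated back inside the
      # original bounds if it would fall outside them.
      import numpy as np

      def zoom_grid(lower, upper, centre, zoom_level):
          """Return the new lower and upper grid bounds after zooming."""
          lower, upper, centre = (np.asarray(x, float) for x in (lower, upper, centre))

          width = (upper - lower) / 2**zoom_level
          new_lower = centre - width / 2.0
          new_upper = centre + width / 2.0

          # Translate the zoomed grid so that it lies entirely within the original grid.
          shift = np.maximum(lower - new_lower, 0.0) - np.maximum(new_upper - upper, 0.0)
          return new_lower + shift, new_upper + shift

      # Example: one zoom level halves the grid width around the current parameter values.
      print(zoom_grid(lower=[0.0, 0.0], upper=[10.0, 1.0], centre=[9.5, 0.2], zoom_level=1))
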
  • Changes:
  • Improved model handling for the internal structural object. The set_model() method has been added to allow either a model number to be set for the first unnumbered model (in preparation for adding new models) or to allow models to be renumbered. The logic of the add_model() method has also been changed. Rather than looping over all atoms of the first model and copying them, which does not work due to the model validity checks, the entire MolList (molecule list) data structure is copied using copy.deepcopy() to make a perfect copy of the structural data. The ModelList.add_item() method has also been modified to return the newly added or numbered model. This is used by the add_model() structural object method to obtain the model object.
  • Updated the Mac OS X framework setup instructions document. New sections have been added for the nose and matplotlib Python packages, as nose is needed for the numpy and scipy testing frameworks and matplotlib might be a useful optional dependency in the future. The mpi4py section has been updated to avoid the non-framework fink version of mpicc which cannot produce universal binaries. A few other parts also have small edits.
  • Removed the Freecode section from the release checklist as Freecode has been permanently shut down. The old relax links are still there (http://freecode.com/projects/nmr-relax), but Freecode is dead (http://freecode.com/about).
  • Fix for the internal structural object MolContainer.last_residue() method. This can now operate when no structural information is present, returning 0 instead of resulting in an IndexError.
  • Updated the script for finding unused imports in the relax source code. Now the file name is only printed for Python files which have unused imports.
  • Completely removed all mentions of Freecode from the release document. The old relax links are still there (http://freecode.com/projects/nmr-relax), but Freecode is dead (http://freecode.com/about).
  • Updated the minfx version in the release checklist document to 1.0.8. This version has not been released yet, but it will include important fixes and additions for constrained parallelised grid searches.
  • Fix for a broken link in the development chapter of the relax manual.
  • Fixes for dead hyperlinks in the relaxation dispersion chapter of the relax manual. The B14 model links to http://www.nmr-relax.com/api/3.2/lib.dispersion.b14-module.html were broken as the B in B14 was capitalised.
  • Sent in the verbosity argument value to the minfx.grid.grid_split() function. The minfx function in the next release (1.0.8) will now be more verbose, so this will help with user feedback when running the model-free analysis on a cluster or multi-core system using MPI.
  • The time user function now uses the chronometer Oxygen icon in the GUI.
  • Removed the line wrapping in the epydoc parameter section of the optimisation function docstrings. This is for the pipe_control.minimise module.
  • More docstring line wrapping removal from pipe_control.minimise.
  • Bug fix for the parameter units descriptions. This only affects a few rare parameters. The specific analysis API parameter object units() method was incorrectly checking if the units value is a function - it was checking the parameter conversion factor instead.
  • Modified the align_tensor.init user function so that the parameters are now optional. This allows alignment tensors to be initialised without specifying the parameter values for that tensor.
  • Modified the profiling script to have a different number of NCYC points per frequency. This complicates the data so that any erroneous reshaping of the data is discovered. Experiments can be expected to have a different number of NCYC points per spectrometer frequency. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Initial attempt at altering the calc_CR72_chi2 target function. This is the first test of restructuring the arrays to allow for higher dimensional computation. All numpy arrays must have the same shape to be multiplied together. The dimensions should be [ei][si][mi][oi][di], i.e. [experiment][spin][spectrometer frequency][offset][dispersion point]. This is complicated by the number of dispersion points being able to change per spectrometer frequency. This implementation brings a high overhead - the first test shows no gain in time, as the creation of the arrays takes all the time. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Temporarily changed the lib/dispersion/cr72.py function to an unsafe state. This change turns off all the safety measures, since they have to be re-implemented for the higher dimensional structures.
  • Altered the profiling script to report cumulative timings and save them to temporary files. This indeed shows that the efficiency has gone down. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added a printout of the chi2 value to the profiling script. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the creation of the special numpy structures outside of the target function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the profiling script to calculate correct values when setting up the R2eff values. This is to test that the returned chi2 value is zero. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed the looping over the experiment and offset indices in calc_chi2, as they are always 0 anyway. This brings a small speed up. Task #7807: Speed-up of dispersion models for clustered analysis.
  • In the profiling script, moved the calculation of values up one level. This is to better see the output of the profiling iterations for cr72.py. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the calculation of the Larmor frequency per spin in the profiling script. The frequency loop also needed to be shifted up one level, as the value was being extracted as 0.0. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Re-inserted the safety checks in the lib/dispersion/cr72.py file. These are re-inserted for the rank-1 cases and make the unit tests pass again. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Important fix for extracting the correct shape when creating new arrays. When using just one field, or when the number of dispersion points is the same throughout, the shape would extend to the number of dispersion points, so that ndarray.shape would report [ei][si][mi][oi][di]. The shape always has to be [ei][si][mi][oi]. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made it easier to switch between single spin and cluster reporting in the profiling script. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Important fix for the creation of the multidimensional pA numpy array. It should be created with numpy.zeros() rather than numpy.ones() for the [ei][si][mi][oi] shape. This allows all dimensions to be rapidly tested with numpy.allclose(pA, numpy.ones(dw.shape)), as pA can have unfilled values when the number of dispersion points differs per spectrometer frequency. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added unit tests demonstrating the 'no Rex' edge case failures of the 'CR72 full' model for a clustered multidimensional calculation, implemented for one field. This is in preparation for catching math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Re-implemented the safety checks in lib/dispersion/cr72.py. These are now implemented for both rank-1 float arrays and higher dimensional arrays. This makes the unit tests pass for the multidimensional computation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added unit tests demonstrating the 'no Rex' edge case failures of the 'CR72 full' model for a clustered multidimensional calculation, implemented for three fields. This is in preparation for catching math domain errors before they occur. These tests cover all parameter value combinations which result in no exchange. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The special numpy structures are now also created for the "CR72" model. This makes most of the system tests pass. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Critical fix for the slicing of values in the target function. This makes the Relax_disp.test_sod1wt_t25_to_cr72 system test pass. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added the self.has_missing flag to the initialisation of the Dispersion class. This is tested once per spin or cluster, saving a loop over the dispersion points when collecting the data. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Created multidimensional error and value numpy arrays. This is to calculate the chi2 sum much faster. The loop over missing data points has been reordered so that it is only entered if missing points are detected. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Switched the looping from spin->frq to frq->spin. Since the number of dispersion points is the same for all spins, this allows the calculation of the pA and kex arrays to be moved one level up, saving a lot of computation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • All of the special numpy arrays are now created as the float64 type. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the filling of the special errors and values numpy arrays to the initialisation of the Dispersion class. These values do not change and can safely be stored there. Task #7807: Speed-up of dispersion models for clustered analysis.
  • A tiny bit more speed, gained by removing the temporary storage of the chi2 calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made copies of numpy arrays instead of creating them from new. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added self.frqs_a as a multidimensional numpy array. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Small fix for the indices into the errors and values numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Lowered the number of iterations in the profiling scripts. This is to use the profiling script as a bug finder. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the calculation of dw_frq out of the spin and spectrometer frequency loops. This is done with a special 1/0 spin numpy array which switches values on or off in the numpy array multiplication. The multiplication needs to first axis-expand dw and then tile the arrays according to the numpy structure. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the calculation of pA and kex out of all loops. This was done using two special 1/0 spin structure arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed the dw_frq_a numpy array, as it was not necessary. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed all looping over spins and spectrometer frequencies. This was the last loop! Task #7807: Speed-up of dispersion models for clustered analysis.
  • Reordered the arrays for code clarity. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The back_calc array is now initialised as a copy of the values array. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Small edit to the profiling script to help with bug finding. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix to ensure that the arrays are correctly initialised with one or zero values. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Very important fix for only replacing the parts of the data array which have NaN values. Previously all values were replaced, which was wrong. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The relative tolerance needed to be increased when testing if the pA array is 1. The Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test now passes. Also added some comment lines to prepare for the masked replacement of values, for example if only some of the etapos values should be replaced. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Restored the profiling script to normal. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made the logic and comments much clearer about how to reshape, axis-expand, and tile the numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Implemented a masked array search for where the "missing" array is equal to 1. This makes it possible to replace all of the masked values from the value array, eliminating the last loops over the missing values. It took over 4 hours to figure out that the mask should be accessed via mask.mask to return the full structure. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Yet another small improvement for the profiling script. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed the multidimensional structure of pA. pA is not multidimensional and can simply be multiplied with the numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the testing of pA in the lib function when pA is just a float. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the unit tests so that pA is sent to the target function as a float instead of an array. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed the multidimensional structure of kex. kex is not multidimensional and can simply be multiplied with the numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the testing of kex in the lib function when kex is just a float. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the unit tests so that kex is sent to the target function as a float instead of an array. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Important fix for replacing values when eta_pos > 700. This fixes the Relax_disp.test_sod1wt_t25_to_cr72 system test, which failed after converting kex to a numpy float. The trick is to make a numpy mask which stores the positions where the values should be replaced, and then to replace them just before the final return. This makes sure that not all values are changed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Increased the kex rate to 1e7 in the clustered unit test cases. This is to demonstrate the case where there is no exchange. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added a chi2 value calculation function for multidimensional numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The newly created chi2 function is now called to perform the calculation for the multidimensional numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Renamed chi2_ND to chi2_rankN. This is a better name for representing the multiple axis calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made special ei, si, mi, and oi numpy structure arrays. This is to rapidly speed up numpy array creation in the target function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced self.spins_a with self.disp_struct. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made the initialisation structures for dw. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Initial attempt at reshaping dw faster. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Switched to using self.ei, self.si, self.mi, self.oi, and self.di. This is for better code readability. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Commented out the sys.exit() call which would make the code fail for an incorrect calculation of dw. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Copied the profiling script for the CPMG model CR72 to the R1rho DPL94 model. The framework of the script will be the same, but the data is a little different. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Started converting the profiling script to DPL94. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced self.(ei,si,mi,oi,di) with self.(NE,NS,NM,NO,ND). These numbers represent the maximum size of each dimension rather than an index. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added the ei index when creating the first dw_mask. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Reordered how the dw initialisation structures are created. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The dw_struct array is now cleared before the calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Started using the new way of constructing dw. This is for running the system tests. Note that somewhere in the dw array the frequencies will differ between the two implementations, but apparently this does not matter. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Inserted a temporary method for switching between the implementations when profiling. Task #7807: Speed-up of dispersion models for clustered analysis.
  • First attempt at speeding up the old dw structure calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Simplified the calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Yet another attempt at implementing a fast dw structure method. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Implemented the fastest way to calculate the dw structure. This uses the numpy ufunc multiply.outer() function to create the outer array, which is then multiplied with the frqs structure. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Renamed the dw temporary structure to a generic structure. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Restructured the calculation of R20A and R20B to be as efficient as possible. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Converted the cr72.py lib file to a multidimensional numpy array calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the catching of dw being zero to use a masked array. Backwards compatibility with the unit tests has been implemented. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Bug fix for the test of kex being zero - it was instead being tested whether kex was equal to 1.0. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Implemented a masked replacement when fact is less than 1.0. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the isnan mask with a function that catches all invalid values.
  • Removed the masked replacement when fact is less than 1.0. This is very strange, but otherwise the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test would fail. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed the slow allclose() function for testing if R20A and R20B are equal. It is much faster to simply subtract and check that the sum is not 0.0. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the temporary R2eff variable with back_calc and used numpy subtract for additional speed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made the lib function a pure numpy array calculation. This requires that r20a, r20b, and dw have the same dimensions as the dispersion points. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changes to the unit tests so that the data is sent to the target function in numpy array format. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed the creation of an unnecessary structure by using numpy multiply. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the mask which finds where to replace values into the __init__() function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Copied the CR72 profiling script to the B14 model. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the profiling script for the B14 model. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the B14 model lib file for the faster numpy multidimensional mode. The implementation comes almost directly from the CR72 model file. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Reverted the use of the mask "mask_set_blank". It did not work, and many system tests started failing. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the target function to handle the B14 model for faster numpy computation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the B14 unit tests to match the numpy input requirements. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added additional tests in b14 for the cases where math errors can occur. This is very easy with a conditional masked search of the arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Comment fix for the detection of E0 being above 700 in the B14 lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed use of "asarray", since the variables are already arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the target function for the CR72 model. The original R20A, R20B, and dw parameters are now also passed into CR72. dw is tested for zero so that flat lines can be returned - it is faster to search the smaller numpy array than the 5-dimensional dw array. R20A and R20B are also subtracted to see if the full model should be used; in the same way, it is faster to subtract the smaller arrays. These small tricks are expected to give a 5-10% speed up. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made the CR72 lib function accept the original R20A, R20B, and dw arrays. This is for speed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the unit tests to send the original R20A, R20B, and dw_orig values into the CR72 lib function tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the profiling script to send R20A, R20B, and dw as the original parameters to the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the target function for the B14 model. The original dw parameter is now also sent into B14. dw is tested for zero so that flat lines can be returned - it is faster to search the smaller numpy array than the 5-dimensional dw array. These small tricks are expected to give a 5-10% speed up. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made the B14 lib function accept the original dw array. This is for speed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the unit tests to send the original dw_orig values into the B14 lib function tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the profiling script to send dw as the original parameter to the B14 lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Copied the CR72 model profiling script to the TSMFK01 model. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the profiling script for use with the TSMFK01 model. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the target function for the TSMFK01 model to send in dw as the original parameter. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the TSMFK01 model lib function to accept dw_orig as input, and replaced the functions for finding math domain errors with masked replacements. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made the TSMFK01 model unit tests send in R20A and dw as numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Large increase in speed for the TSMFK01 model by changing the target function to use multidimensional numpy arrays in the calculation. This is done by restructuring the data into multidimensional arrays of dimension [NE][NS][NM][NO][ND], i.e. the number of experiments, number of spins, number of magnetic field strengths, number of offsets, and maximum number of dispersion points (see the sketch at the end of this list). The speed comes from using numpy ufunc operations. The new version is 2.4 times faster for the single spin calculation and 54 times faster for the clustered analysis.
  • Replaced the math domain checking in the DPL94 model with masked array replacements. Task #7807: Speed-up of dispersion models for clustered analysis.
  • First attempt at speeding up the DPL94 model. This has not yet succeeded, as the Relax_disp.test_dpl94_data_to_dpl94 system test still fails. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Attempt at moving some of the structures into their own section. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for forgetting to raise frqs to the power of 2. This was found by inspecting all printouts before and after the implementation. The new DPL94 implementation now passes all system and unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the expansion of the R1 structure out of the for loops. This is to speed up the __init__() method of the target function class. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the packing of the errors and values out of the for loop in the __init__() method of the target function class. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the multidimensional expansion of inv_relax_times out of the for loop. This can be done for all structures which do not have missing points. Task #7807: Speed-up of dispersion models for clustered analysis.
  • For inv_relax_times, one axis was expanded and tiled up to the number of spins before reshaping and blowing up to the full structure. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the expansion of frqs out of the for loops. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Documentation fix for the description of the input arrays to the lib functions. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Converted the TAP03 model to use multidimensional numpy arrays. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made dw in the TAP03 unit tests a numpy array. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the loop structure in the TAP03 target function with numpy arrays. This makes the model faster. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Reordered the initialisation of the special numpy arrays. This was done in the initialisation part of the relaxation dispersion target function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The MODEL_TSMFK01 model now also has self.tau_cpmg calculated in the initialisation part. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The methods for replacing math domain errors in the TP02 model have been replaced with numpy masks. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for sending in dw as a numpy array in the TP02 model unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the target function for the TP02 model to use higher dimensional numpy array structures. This makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix to add the TP02 model to the part of the class initialisation which prepares the higher dimensional numpy structures. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made the NOREX model a faster numpy array calculation. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed an unnecessary frq_struct structure from the initialisation of the target function. frqs can simply be expanded, and back_calc is cleaned afterwards with disp_struct. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The methods for replacing math domain errors in the M61 model have been replaced with numpy masks. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for sending in r1rho_prime and phi_ex_scaled as numpy arrays in the M61 model unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the target function for the M61 model to use higher dimensional numpy array structures. This makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The methods for replacing math domain errors in the M61b model have been replaced with numpy masks. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for sending in r1rho_prime and dw as numpy arrays in the M61b model unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the target function for the M61b model to use higher dimensional numpy array structures. This makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points is no longer sent to the TSMFK01 model lib function, as it is not used anymore. It has also been removed from the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points and pB are no longer sent to the TP02 model lib function. The number of points is not used anymore, and pB is now calculated inside the lib function. These have also been removed from the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points and pB are no longer sent to the TP02 model lib function, with pB calculated inside the lib function instead. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points, pB, k_AB, and k_BA are no longer sent to the B14 model lib function. The number of points is not used anymore, and pB, k_AB, and k_BA are now calculated inside the lib function. This has been fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the sending of the number of points in the TSMFK01 target function, as this was removed from the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points and pB are no longer sent to the TAP03 model lib function. The number of points is not used anymore, and pB is now calculated inside the lib function. This has been fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points is no longer sent to the CR72 model lib function, as it is not used anymore. This has been fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points is no longer sent to the DPL94 model lib function, as it is not used anymore. This has been fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points is no longer sent to the M61 model lib function, as it is not used anymore. This has been fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points is no longer sent to the M61b model lib function, as it is not used anymore. This has been fixed in the target function, the lib function, and the corresponding unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The methods for replacing math domain errors in the MP05 model have been replaced with numpy masks. The number of points has been removed, as the mask replaces it. The calculation of pB has been moved into the lib function for simplicity. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for sending in dw as a numpy array in the MP05 model unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the target function for the MP05 model to use higher dimensional numpy array structures. This makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The methods for replacing math domain errors in the LM63 model have been replaced with numpy masks. The number of points has been removed, as the mask replaces it. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for sending in the number of points in the LM63 model unit tests. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the target function for the LM63 model to use higher dimensional numpy array structures. This makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the masked replacement of values when phi_ex is zero, which can be spin specific. This was caught by the Relax_disp.test_hansen_cpmg_data_to_lm63 system test starting to fail. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for sending in r20 and phi_ex as numpy arrays in the LM63 unit tests. This follows the switch to masked replacements. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Decreased the precision of a parameter check by one digit in the Relax_disp.test_hansen_cpmg_data_to_lm63 system test. It is unknown why this change has occurred. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The methods for replacing math domain errors in the IT99 model have been replaced with numpy masks. The number of points has been removed, as the mask replaces it. pB is now calculated inside the lib function, making the lib function simpler. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for sending in r20 and dw as numpy arrays in the IT99 unit tests. This follows the switch to masked replacements. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the target function for the IT99 model to use higher dimensional numpy array structures. This makes the model much faster. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The methods for replacing math domain errors in the ns_cpmg_2site_expanded model have been replaced with numpy masks. The number of points has been removed, as the mask replaces it. pB, k_AB, and k_BA are now calculated inside the lib function, making it simpler. The documentation has also been fixed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for sending in r20 and dw as numpy arrays in the ns_cpmg_2site_expanded unit tests. This follows the switch to masked replacements. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the target function for the ns_cpmg_2site_expanded model to use higher dimensional numpy array structures. This makes the model much faster. The Relax_disp.test_cpmg_synthetic_dx_map_points system test however cannot yet be made to pass. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the Relax_disp.test_cpmg_synthetic_dx_map_points system test. The problem was solved by simply copying self.back_calc_a to self.back_calc. In the back_calc_r2eff() function of specific_analyses.relax_disp.optimisation, the last values stored in the class are used - this happens in the Disp_result_command(Result_command) class with self.back_calc = back_calc, and back_calc_r2eff() returns model.back_calc. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The methods for replacing math domain errors in the ns_cpmg_2site_3d model have been replaced with numpy masks. The number of points has been removed, as the mask replaces it. pB, k_AB, and k_BA are now calculated inside the lib function, making it simpler. The magnetization vector is also now filled in the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the NS CPMG 2-site 3D model unit tests for the reduced input to the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the target function of the NS CPMG 2-site 3D model to use the reduced input to the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the linked matrix/vector inner products into chained dot expressions. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Restructured the code so that the essential dot matrix is initialised earlier. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Lowered the number of dot iterations by pre-preparing the dot matrix for another round. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Turned the Mint vector into a 7x1 matrix so that the dimensions fit with the evolution matrix. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Lowered the number of dot operations by pre-preparing the evolution matrix for another round. The power is always even in the system tests. The trick to removing this for loop would be to make a general multi-dot function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the bulk of the NS CPMG 2-site 3D model operations into the lib file. This is to keep the API clean. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the NS CPMG 2-site 3D unit tests, as the input to the function has changed. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the target function for NS CPMG 2-site 3D. This reflects the new API layout. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the NS CPMG 2-site star lib function to take the dw and r20a+r20b input as the higher dimensional type. This moves the main operations from the target function to the lib function, making the API code clean and consistent. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the NS CPMG 2-site star target function to reflect the new input to the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Made the dot evolution structure faster for NS CPMG 2-site 3D. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Implemented the BLAS method for the dot product, which should be faster. The "out" argument however cannot be made to work. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Small fix for the dot method, however the out argument still does not work. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Implemented the dot method via BLAS. This requires an array with one more axis. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Last attempt at using the out argument. In the final dotting loop, the out argument will not work no matter what is tried. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Inner product fix for the NS CPMG 2-site 3D model. This fixes the Relax_disp.test_cpmg_synthetic_ns3d_to_b14, Relax_disp.test_cpmg_synthetic_ns3d_to_CR72, and Relax_disp.test_cpmg_synthetic_ns3d_to_CR72_noise_cluster system tests. The number of dot operations with Mint should correspond to the power. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the temporary self.frqs_a structure with self.frqs, which works for all target functions. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Replaced the temporary self.cpmg_frqs_a structure with self.cpmg_frqs, which works for all target functions. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Restructured all data structures in the target function into the higher dimensional form. Fixed the input to the different models, and restructured how the number of offsets and dispersion points is detected. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Various index fixes after the reordering of the data structures. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for a unit test where the dimensionality of the points has to be one lower. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the plotting, since back_calc can now hold more data points than CPMG frequencies. This is because the numpy array has been expanded to the maximum number of points. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Implemented a frqs_squared calculation in the initialisation of the target function. This is to speed up the calculations. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Restructured frqs_H into the higher dimensional form in the target function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the calculation of dw and dwH out of the for loops for the MMQ CR72 model. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Removed the looping over spins and frequencies for the MMQ CR72 model. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Temporarily removed the check for dw = 0.0 in MMQ CR72. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The number of points is no longer passed to the MMQ CR72 model. Task #7807: Speed-up of dispersion models for clustered analysis.
  • The power is no longer passed to MMQ CR72, since it is not used. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed MMQ CR72 to use the multidimensional data. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the MMQ CR72 unit tests to pass, as dw needs to be a numpy structure. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the calculation of dw out of the for loops for the NS MMQ 2-site model. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Modified the NS MMQ 2-site lib function so that the looping over spins and frequencies occurs inside the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fixed the use of the higher dimensional data in NS MMQ 2-site SQ DQ ZQ. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Documentation fix for NS MMQ 2-site/SQ/DQ/ZQ/MQ. It now explains which dimensions the data should be in. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the reshaping of dw and dwH, since they do not depend on the experiment. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Changed the calculation of the inner product in the NS CPMG 2-site 3D model. The out argument of numpy.dot() is buggy and should not be used. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Added missing instances of the data cleaning. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Bug fix for the LM63 3-site model. The si index has to be used to extract the data sent to the lib function. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Temporarily added the test_korzhnev_2005_all_data_disp_speed_bug system test. This performs a minimisation with 1 iteration, and so gives the chi2 value at the preset parameter values. The chi2 value should be 162.5, but it is instead 74.7104. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Updated the documentation on the dimensionality of the num_points numpy array. It has the dimensions [NE][NS][NM][NO], where each element gives the number of dispersion points at that offset. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Fix for the test_korzhnev_2005_all_data system test. The masking for the replacement of values was wrong. Task #7807: Speed-up of dispersion models for clustered analysis.
  • Moved the cleaning of the data points and the replacement of values out of the loop for the NS MMQ 2-site model. Task #7807: Speed-up of dispersion models for clustered analysis.
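
  As a rough illustration of the numpy restructuring referenced in the TSMFK01 entry above, the following sketch shows the general pattern of the [NE][NS][NM][NO][ND] structures, the multiply.outer() expansion of dw, masked value replacement, and an all-axis chi2 sum. The array contents and names are made up for the example - this is not the relax target function or lib code.

      # Illustrative sketch only - not the relax code.  Shows the [NE][NS][NM][NO][ND]
      # restructuring, the multiply.outer() trick for dw, masked replacement, and a
      # chi2 sum over all axes at once (the chi2_rankN idea).
      import numpy as np

      NE, NS, NM, NO, ND = 1, 3, 2, 1, 10    # experiments, spins, fields, offsets, dispersion points

      dw = np.array([1.0, 2.0, 0.5])                  # per-spin parameter, shape [NS]
      frqs = np.full((NE, NS, NM, NO), 5000.0)        # per-field scaling, shape [NE][NS][NM][NO]

      # Expand dw onto the frequency structure, then use multiply.outer() to add the
      # dispersion point axis, giving the full [NE][NS][NM][NO][ND] structure.
      dw_struct = np.multiply.outer(frqs * dw[None, :, None, None], np.ones(ND))

      # Back calculation on the full structure (a stand-in formula for the example).
      back_calc = 5.0 + dw_struct / 1e4

      # Masked replacement instead of per-point loops, e.g. for missing or invalid points.
      missing = np.zeros(dw_struct.shape, dtype=bool)
      missing[0, 0, 0, 0, 8:] = True                  # pretend the last points are missing
      values = np.full(dw_struct.shape, 5.5)
      errors = np.full(dw_struct.shape, 0.1)
      back_calc[missing] = values[missing]            # missing points contribute zero to chi2

      # chi2 summed over all axes in one numpy operation.
      chi2 = np.sum(((back_calc - values) / errors)**2)
      print(dw_struct.shape, chi2)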

New in version 3.1.5 (February 5th, 2014)

  • Changes:
  • Updated the interatom.unit_vectors user function description to add the text '3D structure'. This is in response to the http://thread.gmane.org/gmane.science.nmr.relax.user/1547 relax-users mailing list message and the change is to clarify the usage of the user function.
  • Created the Noe.test_bug_21591_noe_calculation_fail system test. This is to catch bug #21591 submitted by Martin Ballaschk (https://gna.org/users/mab). This is the complete failure of the NOE analysis. The peak lists attached to the bug report have been included in the test suite to create the system test.
  • Improvements for the steady-state NOE analysis overfit_deselect() method. The spin deselection which occurs at the start of the calc user function call, used to calculate the NOE, is now clearer. Each deselection condition is now explained in detail and the text is now far more informative. In addition, the special condition of all spins being deselected is now caught. If this happens, a RelaxError is raised to prevent the user from going forwards. This should remove confusion as to why the output file is empty.
  • Bugfixes:
  • Fix for bug #21591, the complete failure of the NOE analysis. This bug was reported by Martin Ballaschk (https://gna.org/users/mab). The issue was introduced in the fix for bug #21562. The problem was that the overfit_deselect() method was deselecting all spins with two data points or fewer rather than one or fewer (a sketch of the corrected check is given below).
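
  For clarity, here is a minimal sketch of the corrected deselection condition, together with the all-deselected check mentioned in the changes above. The spin attribute names and the error class are illustrative only, not the actual relax API.

      # Illustrative sketch of the corrected overfit_deselect() condition - the
      # attribute names and the error class are not the actual relax API.
      def deselect_insufficient_noe_data(spins):
          """Deselect spins with fewer than two peak intensity values."""
          for spin in spins:
              if len(spin.intensities) <= 1:      # the buggy check used <= 2
                  spin.select = False

          # The 3.1.5 improvement: stop with an error if every spin has been
          # deselected (relax raises a RelaxError at this point).
          if not any(spin.select for spin in spins):
              raise RuntimeError("All spins have been deselected, no NOE can be calculated.")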

New in version 3.1.4 (February 1st, 2014)

  • Features:
  • The structure.write_pdb user function now supports multiple molecules being present.
  • Large speed optimisations for the internal structural object when multiple models are present.
  • Improved support for replicated spectra in the NOE analysis.
  • Changes:
  • Created the Frame_order.test_generate_rotor2_distribution system test. This is to test the Frame Order distribution generating base script, used for creating the synthetic Frame Order test data, and to demonstrate a failure in handling back-calculated RDC data. To implement this, the test_suite/shared_data/frame_order/cam/ path has been converted into a Python package (with the addition of the __init__.py files). The base data generation script test_suite/shared_data/frame_order/cam/generate_base.py has also been modified to use the absolute path for the data files and its run() method now accepts the save_path argument to allow the files to be saved into a temporary directory.
  • Fixes for the Frame_order.test_generate_rotor2_distribution system test. The test_suite/shared_data/frame_order/cam/generate_base.py script now saves the program state files into the self.save_path directory, preventing the system test from attempting to save files into the relax test suite directories.
  • Another fix for the Frame_order.test_generate_rotor2_distribution system test. The test_suite/shared_data/frame_order/cam/generate_base.py script no longer prints its progress indicator to sys.__stderr__ but to sys.stderr instead. This avoids the progress text from appearing during the relax test suite execution.
  • Created the Structure.test_bug_21522_master_record_atom_count system test. This is designed to catch bug #21522, the structure.write_pdb user function creating an incorrect MASTER record. This hence also catches bug #21520, the failure of the structure.write_pdb user function when creating the MASTER record due to too many ATOM and HETATM records being present. The test simply creates two structural models, adds one atom, and writes out a PDB file, checking its contents.
  • The structure.write_pdb user function can now handle a file instance for the file argument. This is for the Structure.test_bug_21522_master_record_atom_count system test, to allow a dummy file object to be used. This can also be useful for power users.
  • Created the lib.geometry.vectors.unit_vector_from_2point() function. This is used to quickly calculate the unit vector between two points.
  • The lib.structure.represent.rotor.rotor_pdb() function can now handle multiple rotors. Previously this function would fail if called twice with the same structural object.
  • Added the has_molecule() method to the relax internal structural object. This is used to quickly check if a molecule name already exists in the structural object.
  • More improvements for handling multiple rotors in the lib.structure.represent.rotor.rotor_pdb() function. The atom numbering is now better handled.
  • Better support for the writing out of multiple molecules by the structure.write_pdb user function. This is for the internal structural object write_pdb() method. Now each molecule is assigned a different chain ID in the PDB file, and the chain IDs loaded into the structural object are ignored. The chain IDs should however be preserved when using structure.read_pdb followed by structure.write_pdb, without storing the ID. A number of the Structure system tests had to be updated, as now the relax generated PDB files will always write out a chain ID.
  • Large speed up for the internal structural object for when many models are present. The new ModelList.current_models object keeps track of all the models already present in the structural object. This simplifies the checks of the pack_structs() internal structural object method by removing expensive looping. This allows the loading of PDB files to continue to be fast even with many tens or hundreds of thousands of models already loaded.
  • More speed ups for the internal structural object when huge numbers of models are present. Another loop over the structural_data object has been eliminated from the PDB reading load_pdb() method.
  • Another optimisation for the internal structural object for large numbers of models. The ModelList.add_item() method no longer loops over all models to check if a model is already present, instead using the new current_models list.
  • Yet more optimisation for handling large quantities of models in the internal structural object. Now when adding new models to the object, the model_indices and model_list objects are no longer created. This saves much time as the large model_list is now not sorted. A number of structural object methods have been updated to handle the change by switching to the model_loop() method for looping over the models, rather than using the model_indices and model_list objects.
  • The frame order matrix printing function can now output the matrix to any precision. The lib.frame_order.format.print_frame_order_2nd_degree() function now accepts the 'places' argument which allows for higher precision printouts.
  • The behaviour of the rdc.write user function has been changed to output spin ID strings in single quotes. This is to avoid problems with the '#' molecule identifier and the '#' comment character.
  • Fix for the diffusion_tensor.init user function reference in the intro chapter of the manual. This was using a very old and now non-functional syntax.
  • Created the Diffusion_tensor.test_bug_21561_tensor_pdb_failure system test. This is to catch bug #21561, as reported by Martin Ballaschk (https://gna.org/users/mab). This catches the failure of the structure.create_diff_tensor_pdb user function for non-spherical diffusion tensors when no Monte Carlo simulations are present.
  • Added the truncated data for creating a system test to catch bug #21562. This bug was reported by Dhanas Muthu (https://gna.org/users/dhanas) and is the failure of the NOE analysis when spectra are replicated. This consists of the Sparky peak lists attached to the bug report and the modified 2AT7 PDB file. The data has been truncated to only include residues :12, :13, and :14.
  • Shifted the NOE system test script into the new 'noe' directory.
  • Created the Noe.test_bug_21562_noe_replicate_fail system test. This is to catch bug #21562, reported by Dhanas Muthu (https://gna.org/users/dhanas). This is the failure of the NOE analysis when spectra are replicated. This uses the truncated data taken from the files attached to the bug report. The NOE output file is checked to see if the contents are correct.
  • Better support for replicated spectra in the NOE analysis. The saturated and reference peak intensities and errors are now properly averaged. Previously averaging was not used, as the number of replicates N is cancelled in the ratios used for the NOE and error calculation. However this fails when the number of replicates for the saturated spectrum does not match the number of replicates for the reference spectrum. Now any data combination is possible.
  • Another fix for the NOE analysis for when replicated spectra have been collected. Variance averaging rather than error averaging is now used for the peak intensity errors. This is important if the errors for each replicated spectrum are different - a case which is rarely encountered, as the replicates are almost always used to determine one error for all the replicates (a sketch of the averaging is given after this list).
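
  A minimal sketch of the replicate handling described in the two entries above is given below: the intensities are averaged, the variances (not the errors) of the replicates are averaged, and the NOE error then follows from standard propagation of the ratio. The function and variable names are illustrative rather than the relax code, and the exact propagation used by relax may differ.

      # Illustrative sketch of intensity and variance averaging for replicated
      # spectra - not the relax peak intensity code.
      from math import sqrt

      def average_replicates(intensities, errors):
          """Average the replicate intensities and their variances."""
          n = len(intensities)
          intensity = sum(intensities) / n
          variance = sum(err**2 for err in errors) / n    # variance averaging, not error averaging
          return intensity, sqrt(variance)

      # The saturated and reference spectra may now have different replicate counts.
      sat, sat_err = average_replicates([1.00e5, 1.10e5], [5.0e3, 6.0e3])
      ref, ref_err = average_replicates([2.05e5], [4.0e3])

      # NOE ratio and standard error propagation for the ratio.
      noe = sat / ref
      noe_err = noe * sqrt((sat_err / sat)**2 + (ref_err / ref)**2)
      print(noe, noe_err)
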
  • Bugfixes:
  • Fix for bug #21499, the failure of the rdc.write user function. The rdc.write user function fails for back-calculated RDC data. The fix was to handle the missing interatom.rdc_data_types variable.
  • Fix for bug #21522 and bug #21520. These bugs are the structure.write_pdb user function creating an incorrect MASTER record and the failure of the structure.write_pdb user function when creating the MASTER record due to too many ATOM and HETATM records being present. The counts for the ATOM, HETATM, and TER records are now only for a single model, rather than being the sum for all models together.
  • Fix for bug #21561, the structure.create_diff_tensor_pdb user function failure with no simulations. Bug #21561 was reported by Martin Ballaschk (https://gna.org/users/mab). The problem was that the simulation axes of the tensor PDB file were not being initialised correctly when no Monte Carlo simulations had been run.
  • Fix for bug #21562, the failure of the NOE analysis when spectra are replicated. This bug was reported by Dhanas Muthu (https://gna.org/users/dhanas). The problem was that the NOE overfit_deselect() method was deselecting all spins which do not have exactly 2 intensity values. This is incompatible with replicated spectra as the number will be greater than two. The check has been modified to deselect spins only when the number of intensity values is zero or one.

New in version 3.1.3 (January 17th, 2014)

  • Changes:
  • Fix for the parameters listed for the IT99 dispersion model in the manual.
  • Improvements and addition of many links to the lib.dispersion.cr72 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.cr72-module.html.
  • Spacing fix for the lib.dispersion.cr72 module docstring.
  • Improvements and addition of many links to the lib.dispersion.dpl94 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.dpl94-module.html.
  • Improvements and addition of many links to the lib.dispersion.it99 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.it99-module.html.
  • Improvements and addition of many links to the lib.dispersion.lm63_3site API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.lm63_3site-module.html.
  • Improvements and addition of many links to the lib.dispersion.lm63 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.lm63-module.html.
  • Improvements and addition of many links to the lib.dispersion.m61b API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.m61b-module.html.
  • Improvements and addition of many links to the lib.dispersion.m61 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.m61-module.html.
  • Improvements and addition of many links to the lib.dispersion.mmq_cr72 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.mmq_cr72-module.html.
  • Improvements and addition of many links to the lib.dispersion.mp05 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.mp05-module.html.
  • Improvements and addition of many links to the lib.dispersion.ns_cpmg_2site_3d API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.ns_cpmg_2site_3d-module.html.
  • Epydoc URL simplifications.
  • Improvements and addition of many links to the lib.dispersion.ns_cpmg_2site_expanded API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.ns_cpmg_2site_expanded-module.html.
  • Improvements and addition of many links to the lib.dispersion.ns_cpmg_2site_star API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.ns_cpmg_2site_star-module.html.
  • Added the 'NS CPMG 2-site 3D full' model to the lib.dispersion.ns_cpmg_2site_3d module docstring.
  • Improvements and addition of many links to the lib.dispersion.ns_mmq_2site API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.ns_mmq_2site-module.html.
  • Improvements and addition of many links to the lib.dispersion.ns_mmq_3site API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.ns_mmq_3site-module.html.
  • Improvements and addition of many links to the lib.dispersion.ns_r1rho_2site API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.ns_r1rho_2site-module.html.
  • Improvements and addition of many links to the lib.dispersion.ns_r1rho_3site API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.ns_r1rho_3site-module.html.
  • Small docstring edit for the lib.dispersion.mp05 module.
  • Improvements and addition of many links to the lib.dispersion.tap03 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.tap03-module.html.
  • Improvements and addition of many links to the lib.dispersion.tp02 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.tp02-module.html.
  • Epydoc URL simplifications in the lib.dispersion.mp05 module.
  • Epydoc docstring edit in the lib.dispersion.mmq_cr72 module.
  • Improvements and addition of many links to the lib.dispersion.tsmfk01 API documentation. This is for the API documentation at http://www.nmr-relax.com/api/3.1/lib.dispersion.tsmfk01-module.html.
  • Copyright notice updates for the lib.dispersion modules changed today.
  • Added links to the relax wiki, API documentation, and relax website to all dispersion models in the manual. This is to make it easier to find additional information about each of the models.
  • Updated the author list for the submitted paper for the relaxation dispersion analysis.
  • Added the primary reference for relaxation dispersion in relax to the dispersion chapter of the manual. This paper has not yet been published.
  • Removed the single quantum R1rho-type data reference in the introduction of the dispersion chapter of the manual. This is redundant as R1rho data is always single quantum.

New in version 3.1.2 (January 14th, 2014)

  • Changes:
  • The average_intensity() dispersion function now accepts the offset argument. This is for better support of combined offset and spin-lock varied R1rho-type data. The argument is then passed into the find_intensity_keys() function.
  • Improved the DPL94 dispersion model description in the manual.
  • Copied a Sparky peak list, to be modified into a Sparky file without an intensity column.
  • Modified the Sparky file to have no columns with intensity values.
  • Implemented the reading of spins from a Sparky list when no intensity column is present. This is an addition for support request sr #3044 - load spins from a Sparky list.
  • Created the Relax_disp.test_bug_21460_disp_cluster_fail system test. This is to catch bug #21460 reported by Min-Kyu Cho. The save file added to the repository consists solely of the data for the first residue.
  • Speed ups for the Relax_disp.test_bug_21460_disp_cluster_fail system test. The optimisation precision is not important for demonstrating this bug.
  • Updated the main copyright notice for 2014.
  • Fix for the main copyright notice.
  • Updated the copyright notice visible to the user to 2014.
  • Updated the copyright for the relax GUI splash screen for 2014.
  • Improvement for the relax test suite printout with the --time command line argument flag. The tests printed out now have the package and module names removed, so that only the test name remains. This removes a large amount of text, simplifying the printout.
  • Bugfixes:
  • Partial fix for bug #21338 - the bad sRGB profile in some PNGs. This is only partial as some files are still to be converted (the original Bruker logo, and the 16x16, 22x22 and 32x32 sized Bruker icons).
  • Fix for bug #21460, the failure of relaxation dispersion due to incorrect spectrometer information. This is the bug https://gna.org/bugs/?21460, reported by Min-Kyu Cho. There was only one place in the dispersion analysis which failed due to a spectrometer frequency not containing any relaxation data - in the insignificance testing in the auto-analysis.
  • Loosened the chi2 check in the Relax_disp.test_korzhnev_2005_15n_mq_data system test. This is to allow the test to pass on a 32-bit Linux (Mageia 1) test system.

New in version 3.1.1 (December 11th, 2013)

  • Features:
  • Support for reading 3D structures of organic molecules from Gaussian log files using the new structure.read_gaussian user function.
  • Addition of the lib.periodic_table module for storing information about the periodic table.
  • Addition of the lib.nmr module for basic NMR related functions. It currently has functions for converting between ppm, Hz, and rad.s^-1 units (a minimal conversion sketch is given after this feature list).
  • Many improvements to the relaxation dispersion chapter of the user manual.
  • The 'NS MMQ 3-site linear' numeric model - the model for 3-site exchange using 3D magnetisation vectors linearised with kAC = kCA = 0 with the parameters {R20, ..., pA, pB, dw_AB, dw_BC, dwH_AB, dwH_BC, kex_AB, kex_BC}.
  • The 'NS MMQ 3-site' numeric model - the model for 3-site exchange using 3D magnetisation vectors with the parameters {R20, ..., pA, pB, dw_AB, dw_BC, dwH_AB, dwH_BC, kex_AB, kex_BC, kex_AC}.
  • The 'NS R1rho 3-site linear' numeric model - the model for 3-site exchange using 3D magnetisation vectors linearised with kAC = kCA = 0 with the parameters {R1rho', ..., pA, pB, dw_AB, dw_BC, kex_AB, kex_BC}.
  • The 'NS R1rho 3-site' numeric model - the model for 3-site exchange using 3D magnetisation vectors with the parameters {R1rho', ..., pA, pB, dw_AB, dw_BC, kex_AB, kex_BC, kex_AC}.
  • More model nesting in the relaxation dispersion auto-analysis ('CR72' and 'MMQ CR72', 'LM63' and 'LM63 3-site').
  • Large speed up of the 'TP02' and 'NS R1rho 2-site' dispersion models by minimising repetitive calculations.
  • Support for the loading of spins directly from peak lists.
  • Support for the reading of peak intensities from NMRPipe seriesTab formatted files (*.ser).
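
    As an illustration of the conversions handled by the new lib.nmr module, the underlying relations are omega = 2*pi*nu for Hz to rad.s^-1, and delta_nu[Hz] = delta[ppm] * nu_Larmor * 1e-6 for ppm to Hz. The helper names below are hypothetical and do not necessarily match the lib.nmr API:

        # Hypothetical helpers showing the standard NMR unit conversions
        # (the real lib.nmr function names and signatures may differ).
        from math import pi

        def frequency_to_rad_per_s(frq_hz):
            """Convert a frequency in Hz to an angular frequency in rad.s^-1."""
            return 2.0 * pi * frq_hz

        def ppm_to_hz(value_ppm, larmor_frq_hz):
            """Convert a chemical shift or offset in ppm to Hz.

            One ppm corresponds to 1e-6 of the Larmor frequency of the nucleus
            at the given magnetic field strength.
            """
            return value_ppm * larmor_frq_hz * 1e-6

        # Example: a 2 ppm offset for 15N on an 800 MHz (1H) spectrometer,
        # where the 15N Larmor frequency is roughly 81.1 MHz.
        offset_hz = ppm_to_hz(2.0, 81.1e6)
        offset_rad = frequency_to_rad_per_s(offset_hz)
        print(offset_hz, offset_rad)
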
  • Changes:
  • Small improvement for the devel_scripts/log_converter.py script for detecting commit boundaries.
  • Added many small details to the release checklist document. This is for the formatting and editing of the CHANGES file, which is used for the release announcements. Some additional details about the API documentation at http://www.nmr-relax.com/api have been added too.
  • Added sectioning printouts for the relaxation dispersion auto-analysis. This simply tells the user which part of the protocol is currently being performed.
  • Setup for testing the sample_scripts/relax_disp/R1rho_analysis.py sample script. The script was copied into the test_suite/shared_data/dispersion/r1rho_off_res_tp02/ data directory where it will be tested on real data. The 'fake_sequence.in' and 'unresolved' files have been created to allow the script to run, and the script itself has been heavily debugged.
  • All of the relaxation dispersion auto-analysis options are now exposed by the sample scripts. This includes the pre_run_dir argument for specifying a directory of results from a non-clustered analysis and the flag for running MC simulations for all models.
  • Added the DATA_PATH variable to the cpmg_analysis.py dispersion sample script. This allows the user to more easily specify a different directory for the files.
  • Docstring improvement for the test_suite/shared_data/dispersion/r1rho_off_res_tp02/R1rho_analysis.py script.
  • Synchronised the test_suite/shared_data/dispersion/Hansen/relax_disp.py with the sample script. This script now matches very closely with the sample_scripts/relax_disp/cpmg_analysis.py sample script. This is for sample script debugging purposes.
  • Created a base data pipe for Flemming Hansen's truncated CPMG data for testing out missing data. The :4 spin is missing just a few data points, whereas the :71 spin is missing all 800 MHz data.
  • Created the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. This is used to demonstrate a failure in the 'R2eff' model when some data is missing.
  • Expansion and fixes for the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test. The parameters for spin :4 are now being checked, and all the checks have been updated for the changed data. The parameter values are slightly different because data is now missing and because only 3 spins are used for the error analysis, whereas in all other Hansen CPMG data sets the more accurate errors come from all spins.
  • The lib.dispersion.cr72.r2eff_CR72() function is now more robust. Values less than 1.0 are now caught to avoid passing them into the numpy.arccosh() function, as illustrated in the sketch below. This avoids many warning messages on Mac OS X.
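
    A minimal sketch of this safeguard (not the exact relax code): arccosh(x) is only defined for x >= 1, so arguments that dip just below 1.0 through floating point error are clipped before the call.

        # Clip arguments below 1.0 before calling numpy.arccosh() to avoid
        # NaN results and the associated runtime warnings.
        import numpy as np

        def safe_arccosh(x):
            """Return arccosh(x) with values below 1.0 clipped to 1.0."""
            return np.arccosh(np.maximum(x, 1.0))

        print(safe_arccosh(np.array([0.9999999999, 1.0, 2.5])))
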
  • Added a Gaussian DFT optimisation log file to the shared data directories. This will be used to test the reading of structural data from Gaussian files.
  • Modified the Relax_disp.test_hansen_cpmg_data_missing_auto_analysis system test to catch another failure. This is the failure of all numeric models when all data from one magnetic field strength is missing for a spin.
  • Created data for a 'NS MMQ 3-site (branched)' model using cpmg_fit from Dmitry Korzhnev.
  • The relax_disp.r2eff_read_spin user function now really strips comments and empty lines from the file.
  • A big change to the usage of the relax_disp.r2eff_read_spin user function. Now the nu_CPMG frequency or the spin-lock field strength must be set prior to calling this user function. This allows for more flexibility, as the experiment IDs and frequency values in the files often do not match to the same number of decimal places. The frequency is no longer read from the file but must be preset.
  • Created a relax script for back calculating R2eff values for the same parameters as cpmg_fit. This is for the 'NS MMQ 3-site (branched)' CPMG dispersion model. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_test_suite.
  • Created the Relax_disp.test_ns_mmq_3site_branched system test. This is for the 'NS MMQ 3-site (branched)' CPMG dispersion model. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_test_suite.
  • Added the 'NS MMQ 3-site' models to the dispersion variables. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site (linear)' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_the_model_to_the_list.
  • Added another Gaussian log file of strychnine, this time with DFT structure optimisation. The file is bzip2 compressed to save space.
  • Created the Structure.test_read_gaussian_strychnine system test. This will be used for implementing and testing the structure.read_gaussian user function.
  • Created the lib.periodic_table module for storing information about the periodic table. This is via the periodic_table object which will have different methods for obtaining different information about an element.
  • Implemented the structure.read_gaussian user function. This will read the final structural data out of a Gaussian log file.
  • Improved the checking of the Structure.test_read_gaussian_strychnine system test. This now checks all the atomic information loaded.
  • Simple fix for the Relax_disp.test_korzhnev_2005_*_data system tests. The CPMG frequencies are now being set up in the setup_korzhnev_2005_data() method.
  • Added support for the 'NS MMQ 3-site' model parameters to the lib.text.gui module. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site (linear)' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax.
  • Added the 'NS MMQ 3-site' models to the relax_disp.select_model user function frontend. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site (linear)' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_front_end.
  • Added support for the 'NS MMQ 3-site' models to the relax_disp.select_model user function back end. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site (linear)' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_disp.select_model_user_function_back_end.
  • Added support for the new 3-site exchange dispersion parameters. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site (linear)' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#Adding_support_for_the_parameters.
  • Removed the brackets from the 'NS MMQ 3-site (linear)' dispersion model name.
  • Renamed the Relax_disp.test_ns_mmq_3site_branched system test to Relax_disp.test_ns_mmq_3site.
  • Fixes for the loop_parameters() dispersion function for the new 'NS MMQ 3-site' model parameters. The new parameters were not being handled by this function.
  • Created the target functions for the 'NS MMQ 3-site' models. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site (linear)' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_target_function.
  • Added the R2eff calculating functions for the 'NS MMQ 3-site' models to the relax library. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site linear' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_library.
  • Added the 'NS MMQ 3-site' models to the dispersion auto-analysis. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site linear' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_auto-analysis.
  • Added the 'NS MMQ 3-site' models to the GUI model list. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site linear' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_GUI.
  • Updated the 'MMQ 2-site' model description in the manual. The R2_DQ = R2_ZQ = R20 assumption is now explained.
  • Added the 'NS MMQ 3-site' models to the relax user manual. This is for the 'NS MMQ 3-site' and 'NS MMQ 3-site linear' CPMG dispersion models. This follows the tutorial for adding relaxation dispersion models at http://wiki.nmr-relax.com/Tutorial_for_adding_relaxation_dispersion_models_to_relax#The_relax_manual.
  • Completed the 'MMQ 2-site' documentation in the manual. The equations for the numeric evolution of SQ, ZQ and DQ data were missing.
  • Huge speed ups of the relaxation dispersion analysis. This is due to the removal of huge inefficiencies in the loop_point(), return_cpmg_frqs() and return_spin_lock_nu1() functions of the specific_analysis.relax_disp.disp_data module. Two new functions, return_cpmg_frqs_single() and return_spin_lock_nu1_single(), have been introduced to pull out the nu_CPMG and spin-lock field strengths for a given experiment and spectrometer frequency. This avoids calling the loop_exp() and loop_frq() functions from within loop_point(), which itself is often called inside a loop_exp() and loop_frq() sequence. A generic sketch of the idea is given below.
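
    The idea behind this refactoring can be shown with a generic sketch (not the relax code; the data and names are invented): the dispersion points for each experiment and spectrometer frequency pair are extracted once into a lookup table, so the innermost loops no longer re-scan all experiments and field strengths on every iteration.

        # Generic illustration of pre-extracting per-(experiment, field) data
        # once, instead of filtering the full data set inside nested loops.
        from collections import defaultdict

        # Hypothetical flat list of (experiment_type, spectrometer_frq, nu_cpmg) entries.
        data = [
            ("SQ CPMG", 500e6, 50.0), ("SQ CPMG", 500e6, 100.0),
            ("SQ CPMG", 800e6, 50.0), ("SQ CPMG", 800e6, 150.0),
        ]

        # Build the lookup table once, outside the fitting loops.
        points = defaultdict(list)
        for exp, frq, nu in data:
            points[(exp, frq)].append(nu)

        # The inner loops then index the table directly rather than re-filtering `data`.
        for (exp, frq), nu_list in points.items():
            print(exp, frq, nu_list)
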
  • Added the results of cpmg_fit minimisation of the cpmg_fit synthetic data for the 'NS MMQ 3-site' model.
  • Fixes for the 'NS MMQ 3-site' dispersion models - the evolution matrix is now correctly constructed.
  • Another fix for the 'NS MMQ 3-site' dispersion models. The creation of the Z-matrix had a copy and paste error in that the heteronuclear chemical shift sign was negated when it should be positive. This was only in one of the two chemical shift numbers.
  • Loosened the chi-squared check of the Relax_disp.test_ns_mmq_3site system test to allow it to pass.
  • Speed up of the Relax_disp.test_ns_mmq_3site system test. The relax_disp.plot_disp_curves user function call is now skipped as it takes too long.
  • Renamed the 'ns_mmq_3site_branched' dispersion test data directory to 'ns_mmq_3site'.
  • Created the Relax_disp.test_ns_mmq_3site_linear system test and modified Relax_disp.test_ns_mmq_3site. The Relax_disp.test_ns_mmq_3site_linear system test uses the old data from the directory test_suite/shared_data/dispersion/ns_mmq_3site/, as this had kAC = 0, now copied into the ns_mmq_3site_linear/ directory. This system test uses the 'NS MMQ 3-site linear' model. The base data generated by cpmg_fit for the Relax_disp.test_ns_mmq_3site system test was modified so that kAC is no longer 0, but set to 1000. This should properly test the 'NS MMQ 3-site' model.
  • Renamed the 'MMQ 2-site' model to 'NS MMQ 2-site'. This is so that the name matches those of the 'NS MMQ 3-site linear' and 'NS MMQ 3-site' models.
  • Renamed all remaining instances of 'MMQ 2-site' to 'NS MMQ 2-site'. This is simply changing variable, method and module names.
  • Removed the 'MMQ 3-site branched' and 'MMQ 3-site linear' models from the to do list in the manual. These two dispersion models are now implemented.
  • Renamed the 'MQ CR72' dispersion model to 'MMQ CR72'. The model was designed by Korzhnev et al., 2004 for proton-heteronuclear SQ, ZQ, DQ, and MQ data (or MMQ data), so the change is logical as the model is not just for MQ data.
  • Clean up of the 'NS R1rho 3-site' model names in the manual. The word 'branched' has been removed and the notation now matches the 'NS MMQ 3-site' models.
  • Clean up of the parameter lists in the dispersion model table of the manual.
  • The pC parameter constraints are now implemented for the 3-site dispersion models. The new constraints are 0