What's new in Proxmox Virtual Environment 6.1

Dec 4, 2019
  • Debian Buster 10.2 and a Linux Kernel 5.3
  • QEMU 4.1.1, LXC 3.2, Corosync 3.0, ZFS 0.8.2
  • Ceph 14.2.4 (Nautilus)
  • GUI: many new data center configuration options
  • Change network settings via GUI without reboot (ifupdown2)
  • New HA migrate option
  • Set noVNC scale mode in "My settings"
  • TOTP and U2F as two factor authentication
  • Container: pending changes
  • SPICE enhancements
  • and so many more...

New in Proxmox Virtual Environment 6.0 (Jul 23, 2019)

  • Debian Buster 10 and a Linux Kernel 5.0
  • QEMU 4.0, LXC 3.1.0, Corosync 3.0.2
  • Proxmox cluster stack with Corosync 3 using Kronosnet
  • Ceph 14.2 (Nautilus) and many new functionalities in the Ceph management dashboard
  • QEMU live migrate disks backed by local storage
  • Encryption support for Ceph OSD and ZFS
  • and much more...

New in Proxmox Virtual Environment 5.4 (Apr 12, 2019)

  • Installing Ceph via user interface with the new wizard – The distributed storage technology Ceph has been integrated into the Proxmox VE software stack since 2014 and comes with its own packages and support from the Proxmox team. Configuring a Ceph cluster has already been possible via the web interface; with Proxmox VE 5.4 the developers have now also brought the installation of Ceph from the command line to the user interface, making it fast and easy to set up and configure a hyper-converged Proxmox VE/Ceph cluster. Additionally, enterprises on a budget can use commodity off-the-shelf hardware, allowing them to cut costs for their growing data storage demands.
  • Greater flexibility with High Availability improvements – Proxmox VE 5.4 provides new options to set the HA policy data center-wide, changing how guests are treated upon a node shutdown or reboot. This brings greater flexibility and choice to the user. The policy choices are:
    - Freeze: always freeze services, independently of the shutdown type (reboot, poweroff).
    - Fail-over: never freeze services; a service will be recovered to another node if possible, and if the current node does not come back up within the grace period of one minute.
    - Default: the current behavior; freeze on reboot but do not freeze on poweroff.
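    As a sketch, in current Proxmox VE releases this data-center-wide policy corresponds to the shutdown_policy property in /etc/pve/datacenter.cfg; the option and value names shown here are illustrative and may differ between releases, so consult the documentation for your version:

    ```
    # /etc/pve/datacenter.cfg -- HA shutdown policy (illustrative sketch)
    # conditional = default behavior (freeze on reboot, stop on poweroff)
    # freeze      = always freeze services on shutdown
    # failover    = recover services to another node
    ha: shutdown_policy=failover
    ```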
  • Suspend to disk/hibernation support for QEMU/KVM guests – With Proxmox VE 5.4, users can hibernate QEMU guests independently of the guest OS and have them resume properly on the next restart. Hibernation saves the RAM contents and the internal state to permanent storage. This allows users to preserve the running state of their QEMU guests across most upgrades to, and reboots of, the PVE node. Additionally, it can speed up the startup of guests running complex workloads, as well as workloads that need a lot of resources during initial setup but free them later on.
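    On the command line, the rough equivalent is a sketch like the following (the VM ID 100 is a placeholder; the exact qm options may vary by release):

    ```shell
    # Hibernate VM 100: RAM contents and internal state are written to storage
    # (must be run as root on the Proxmox VE node)
    qm suspend 100 --todisk 1

    # On the next start, the VM resumes from the saved state
    qm start 100
    ```
    
    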
  • Security: Support for U2F authentication – Proxmox VE 5.4 supports the U2F (Universal 2nd Factor) protocol, which can be used in the web-based user interface as an additional method of two-step verification for users. U2F is an open authentication standard that simplifies two-factor authentication. Since it is required in certain domains and environments, this is an important improvement to security practices. Both the new U2F authentication and the TOTP second-factor authentication can be configured by each user without needing the ‘User.Modify’ permission.
  • Improved ISO installation wizard – The Proxmox VE ISO installation wizard has been optimized and now offers the ability to go back to a previous screen during the installation. Users can adapt the choices they made without restarting the complete installation process. Before the actual installation, a summary page displays all relevant information.
  • Improved QEMU guest creation wizard - As often requested by the Proxmox community, options such as the machine type (q35, pc-i440fx), firmware (SeaBIOS, UEFI), or SCSI controller can now be selected directly in the VM creation wizard, and dependent options are set to sensible values automatically.

New in Proxmox Virtual Environment 5.3 (Dec 5, 2018)

  • Proxmox VE and CephFS:
  • Proxmox VE 5.3 now includes CephFS in its web-based management interface, thus expanding its comprehensive list of already supported file and block storage types. CephFS is a distributed, POSIX-compliant file system that builds on a Ceph cluster. Like Ceph RBD (RADOS Block Device), which is already integrated into Proxmox VE, CephFS now serves as an alternative interface to Ceph storage. On CephFS, Proxmox allows storing VZDump backup files, ISO images, and container templates. The distributed file system CephFS eliminates the need for external file storage such as NFS or Samba and thus helps reduce hardware costs and simplify management.
  • The CephFS file system can be created and configured with just a few clicks in the Proxmox VE management interface. To deploy CephFS, users need a working Ceph storage cluster and a Ceph Metadata Server (MDS) node, which can also be created in the Proxmox VE interface. The MDS daemon separates metadata and data from each other and stores them in the Ceph file system. At least one MDS is needed, but it is recommended to deploy multiple MDS nodes to improve availability and avoid a single point of failure. If several MDS nodes are created, only one will be marked as ‘active’ while the others stay ‘passive’ until they are needed in case the active one fails.
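    The CLI counterpart of these steps can be sketched roughly as follows (this assumes a healthy Ceph cluster already exists on the node; the exact pveceph subcommand spelling varies across Proxmox VE versions):

    ```shell
    # Create a Metadata Server (MDS) daemon on this node
    pveceph mds create

    # Create the CephFS file system and register it as Proxmox VE storage
    # ('cephfs' is an example name)
    pveceph fs create --name cephfs --add-storage
    ```
    
    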
  • Further Improvements in Proxmox VE 5.3:
  • Proxmox VE 5.3 brings many improvements in storage management. Via the disk management it is possible to easily add ZFS raid volumes, LVM, and LVM-thin pools as well as additional simple disks with a traditional file system. The existing ZFS over iSCSI storage plug-in can now access LIO targets in the Linux kernel. Nesting is enabled for LXC containers, making it possible to use LXC or LXD inside a container. Also, access to NFS or CIFS/Samba servers can be configured inside containers. For the keen and adventurous user, Proxmox VE brings a simplified configuration of PCI passthrough and virtual GPUs (vGPUs such as Intel KVMGT), now even possible via the web GUI.
  • Countless bugfixes and smaller improvements are listed in the release notes and can be found in detail in the Proxmox bugtracker or in the Git repository.

New in Proxmox Virtual Environment 5.2 (May 16, 2018)

  • Cloud-Init support for automating VM provisioning:
  • Proxmox VE 5.2 now supports Cloud-Init, a multi-distribution package that handles the initial setup of a virtual machine as it boots for the first time, and allows provisioning of VMs that have been deployed from a template. With the Cloud-Init package, Proxmox users can easily configure host names, add SSH keys, set up mount points, or run post-install scripts via the graphical user interface. It also enables automation tools such as Ansible, Puppet, Chef, or Salt to access pre-installed disk images and clone new servers from them.
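    A typical Cloud-Init template workflow on the CLI can be sketched like this (the VM ID 9000, the cloud image file name, and the storage name 'local-lvm' are placeholder examples):

    ```shell
    # Create an empty VM and import a distribution cloud image as its disk
    qm create 9000 --name ci-template --memory 2048 --net0 virtio,bridge=vmbr0
    qm importdisk 9000 bionic-server-cloudimg-amd64.img local-lvm
    qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

    # Attach the special Cloud-Init drive and make the imported disk bootable
    qm set 9000 --ide2 local-lvm:cloudinit
    qm set 9000 --boot c --bootdisk scsi0 --serial0 socket

    # Set user, SSH key, and network configuration consumed by Cloud-Init
    qm set 9000 --ciuser admin --sshkeys ~/.ssh/id_rsa.pub --ipconfig0 ip=dhcp

    # Convert to a template that new VMs can be cloned from
    qm template 9000
    ```
    
    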
  • SMB/CIFS Storage Plug-in:
  • The flexible Proxmox VE storage model now integrates an SMB/CIFS storage plug-in manageable via the web interface. CIFS, like NFS, is one of the primary file systems used in network-attached storage (NAS). CIFS, the "Common Internet File System" used by Windows operating systems for file sharing, allows the SMB/CIFS backend to connect to Windows file servers or other SMB-compatible servers.
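    On the CLI, adding such a share can be sketched with pvesm (the storage ID 'backup-nas', server address, share name, and credentials below are placeholders):

    ```shell
    # Register a CIFS/SMB share as Proxmox VE storage for backups and ISOs
    pvesm add cifs backup-nas --server 192.168.1.50 --share backups \
        --username backupuser --password secret --content backup,iso
    ```
    
    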
  • Let’s Encrypt Certificate Management via GUI:
  • With the new version 5.2, Proxmox users can now manage their Let’s Encrypt certificates via the Proxmox interface, easing the work of administrators significantly. Let’s Encrypt is an automated and open certificate authority (CA) providing the free digital certificates needed to enable secure HTTPS (SSL/TLS) for websites. Proxmox users have been able to use Let’s Encrypt certificates since version 4.2; now they can issue and renew them with two simple clicks via the web interface.
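    In recent Proxmox VE releases the CLI counterpart lives under pvenode; the following is a sketch only (account name, e-mail, and domain are placeholders, and the exact subcommands may differ in older versions):

    ```shell
    # Register an ACME account with Let's Encrypt
    pvenode acme account register default mail@example.com

    # Configure the node's certificate domain and order the certificate
    pvenode config set --acme domains=pve1.example.com
    pvenode acme cert order
    ```
    
    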
  • Proxmox VE 5.2 delivers numerous additional features for improved usability, scalability, and security, including:
  • Creation of clusters via the graphical user interface. This feature makes creating and joining nodes to a Proxmox cluster extremely simple and intuitive even for novice users.
  • Expanded functionality of LXC: Creating templates or moving disks from one storage to another now also works for LXC. The move-disk function can be used for stopped/paused containers and serves as an alternative to backup/restore.
  • If the QEMU guest agent is installed, the IP address of a virtual machine is displayed in the GUI.
  • Administrators can now easily create and edit new roles via the GUI.
  • Setting I/O limits for restore operations is possible (globally or more fine-grained per storage) to avoid I/O load getting too high while restoring a backup.
  • Configuration of ebtables in the Proxmox VE Firewall.

New in Proxmox Virtual Environment 5.0 (Jul 4, 2017)

  • New Proxmox VE Storage Replication Stack:
  • Replicas provide asynchronous data replication between two or more nodes in a cluster, thus minimizing data loss in case of failure. For organizations using local storage, the Proxmox replication feature is a great option to increase data redundancy for high-I/O workloads while avoiding the need for complex shared or distributed storage configurations.
  • With Proxmox VE 5.0, Ceph RBD becomes the de-facto standard for distributed storage. Packaging is now done by the Proxmox team. Ceph Luminous is not yet production-ready but is already available for testing.
  • We also have a simplified procedure for disk import from different hypervisors. You can now easily import disks from VMware, Hyper-V, or other hypervisors via a new command line tool called ‘qm importdisk’.
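    A minimal usage sketch of the new tool (VM ID 200, the source path, and the target storage 'local-lvm' are example values):

    ```shell
    # Import a VMware disk image into existing VM 200 on storage 'local-lvm'
    qm importdisk 200 /mnt/export/vm-disk.vmdk local-lvm

    # The imported disk then shows up as an 'unused' disk on VM 200
    # and can be attached via the GUI or a further 'qm set' call
    ```
    
    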
  • Other new features are live migration with local storage via QEMU, USB and host PCI address visibility in the GUI, bulk actions and filtering options in the GUI, and an optimized noVNC console.
  • And as always, we have included countless bug fixes and improvements in many places.

New in Proxmox Virtual Environment 4.4 (Dec 14, 2016)

  • Ceph dashboard:
  • The new Ceph dashboard gives the administrator a comprehensive overview of the Ceph status, the Ceph monitors, the Ceph OSDs, and the current performance and utilization of the Ceph cluster. Together with the existing disk management, the new dashboard improves the ease of use and administration of Ceph storage and paves the way to the complete software-defined data center.
  • Unprivileged Container:
  • The creation of unprivileged containers moves from the command line to the GUI. Many LXC templates for various operating systems have been updated. Another improvement in Proxmox VE 4.4 is the CPU core limitation, which helps distribute performance between containers. The new container restart migration function will help with server moves and maintenance work on the host.
  • High Availability Stack:
  • The Proxmox VE HA Stack brings new functions, many of them have been implemented and improved in the HA Web Interface at the suggestion of the community: The two tabs "resource" and "HA status" have been merged. A consistent view for the current HA status with the ability to edit and add HA resources has been added, and a new HA group editor now allows users to set priorities directly from the GUI. All details of these changes and improvements in the HA stack are already accessible in the reference documentation directly via the web interface (via the help button).
  • Another new feature in Proxmox VE 4.4 is a dedicated live migration network (via command line).

New in Proxmox Virtual Environment 4.3 (Sep 27, 2016)

  • The new version Proxmox VE 4.3 comes with completely new, comprehensive reference documentation. The new documentation framework allows a global as well as a contextual help function. Proxmox users can access and download the technical documentation via the central help button (available in various formats such as HTML, PDF, and EPUB). A main asset of the new documentation is that it is always specific to the user's current software version. In contrast to the global help, the contextual help button shows users the part of the documentation they currently need.
  • The new reference documentation is created like the former Proxmox technical documentation: man pages are auto-generated from the code, and the help content itself is written by the developers as comments in the code. The Proxmox VE project then uses AsciiDoc to generate the documentation. To participate in the documentation project, users can send a patch to the open-source project proposing new content. The former documentation, the Proxmox VE wiki, stays public, links to the reference documentation, and hosts all the how-tos, use cases, etc.
  • The updated vertical GUI structure is one of the main advancements made to the GUI of Proxmox VE 4.3. Proxmox developers re-arranged some of the horizontal menus in the GUI framework Sencha Ext JS 6 introduced with Proxmox version 4.2, and they are now vertical. This structure also allowed the Proxmox developers to build groups, add icons, and optimize the logical navigation structure. The flat design of the Sencha theme made this step essential, as menus were not easily recognizable. The vertical structure now shows every menu in a single line.
  • In the newly added groups content is unfolded and displayed by default. The same menus as in the old structure are shown. Users can choose to fold or unfold groups. For minimal supported displays the new vertical structure now provides more space in total.
  • More new features in the GUI:
  • New status overview for host, VM, container and storage.
  • Signal colors have been added and show for example when capacity of a CPU is used.
  • Disk management: new disk overview, including S.M.A.R.T. status and wearout for enterprise SSDs. Disk management is new to the GUI; it used to be accessible only via Ceph and the command line.
  • Defaults for VM creation: The “Create VM” wizard now proposes optimal settings depending on the selected operating system. For example, the default for a Linux-based OS is a “virtio scsi disk”.
  • Open console with double-click on VM or container (allow popups first)
  • Search function in the GUI (“ctrl-shift-f”)
  • “Task log”-window remembers its window size
  • Proxmox VE is a bare-metal ISO installer based on the latest Debian Jessie 8.6, combined with a long-term Linux 4.4 kernel from Ubuntu 16.04 LTS (Xenial Xerus), with LXC 2.0. Proxmox VE 4.3 has also seen many bug fixes and optimizations, for example for snapshots, rollback, and LVM-thin.

New in Proxmox Virtual Environment 4.2 (Apr 28, 2016)

  • The new Sencha Ext JS 6 framework brings a modern 'flat design' look and feel to the Proxmox VE GUI, with a reworked icon set providing consistency and an optimal user experience. In the 'Summary View', data is now visualized in graphs, which gives users an overview of all their performance data. Browser variations are handled automatically so that the charts always display correctly. The graphs provide enhanced interactive features such as click zoom. With the new Ext JS framework, more settings moved from the command line to the GUI, such as LXC mount point options or syslog date filtering. Translations for French, German, Italian, and Norwegian were updated within the new GUI, which is translated into 19 languages in total.
  • New LVM-thin and ZFS improvements help increase storage utilization:
  • Proxmox VE is a bare-metal ISO installer based on the latest Debian Jessie 8.4, combined with a long-term Linux 4.4 kernel from Ubuntu 16.04 LTS (Xenial Xerus). With Proxmox VE 4.2, logical volumes can now be thin-provisioned, and the Proxmox bare-metal ISO installer therefore offers LVM-thin or ZFS with just one click. A storage administrator can dedicate more capacity to virtual machines than is physically available and create logical volumes that are larger than the available extents. This offers great flexibility, storage can be expanded dynamically when needed, and it can help avoid purchasing additional storage because of unused, over-allocated capacity.
  • Another new feature is that ZFS storage can now be selected during the installation process. If the admin chooses ZFS, the ZFS-Plugin will be automatically configured out-of-the-box and the user does not have to re-configure it later.
  • New SSL certificates with “Let's Encrypt”:
  • Proxmox VE 4.2 now works with the free web server certificates from “Let’s Encrypt”. “Let’s Encrypt” is a free, automated, and open certificate authority providing free SSL certificates with short expiry and automated renewal. This significantly simplifies the process of getting and maintaining a certificate for secure websites with simple commands. The objective of “Let’s Encrypt” and the ACME protocol is to make it possible to set up an HTTPS server and have it automatically obtain a browser-trusted certificate, without any human intervention. This is accomplished by running a certificate management agent on the web server.

New in Proxmox Virtual Environment 4.1 (Dec 11, 2015)

  • The recent release is based on the latest Debian Jessie and on the 4.2.6 kernel with LXC and QEMU 2.4.1. Based on feedback from the Proxmox community and customers, countless small improvements and bug fixes went into the product. Included are better ZFS integration in the ISO installer, better startup and shutdown behavior, disk resizing for LXC containers, and several LXC technology previews, such as support for unprivileged containers or LVM-thin.
  • All TurnKey GNU/Linux V14 Appliances are now available as LXC templates.

New in Proxmox Virtual Environment 4.0 (Oct 7, 2015)

  • Debian Jessie 8.2 and 4.2 Linux kernel
  • Linux Containers (LXC)
  • IPv6 support
  • Bash completion
  • New Proxmox VE HA Manager

New in Proxmox Virtual Environment 4.0 Beta 2 (Sep 10, 2015)

  • Countless improvements for LXC, especially the integration in our storage model
  • Migration path from OpenVZ to LXC
  • Linux Kernel 4.2
  • Ceph Server packages (0.94.x - hammer release)
  • Embedded NoVNC console
  • Improved IPv6 support
  • Countless bug fixes

New in Proxmox Virtual Environment 4.0 Beta 1 (Jun 24, 2015)

  • The Proxmox VE HA Manager (pve-ha-manager), the new resource manager for the high-availability cluster, is one of the main new features. The pve-ha-manager, developed by the Proxmox team, replaces the former rgmanager. The HA manager monitors all virtual machines and containers on the cluster and automatically takes action if one of them fails. It works out of the box, and watchdog-based fencing additionally simplifies deployments dramatically. All HA settings are configured via the GUI.
  • This beta version also comes with a brand-new Proxmox HA Simulator allowing users to learn and test all the functionality of the Proxmox VE HA solution prior to going into production.
  • Proxmox VE 4.0 will be the first version to include Linux Containers (LXC). The new container solution for Proxmox VE will be fully integrated into the Proxmox VE framework, including, for example, the storage plugins. It works with all modern Linux kernels.
  • Also integrated in this beta version are the first stable DRBD9 packages. DRBD9 is perfectly suited for high performance workloads, especially when high IOPS are required.

New in Proxmox Virtual Environment 3.4 (Feb 19, 2015)

  • Highlights are the integrated ZFS file system, a ZFS storage plug-in, hotplug and NUMA support (non-uniform memory access), all based on latest Debian Wheezy 7.8. The Proxmox developers considered many user feature requests and added many GUI improvements like start/stop all VMs, migrate all VMs or disconnect virtual network cards.
  • The integrated ZFS (OpenZFS) is an open source file system and logical volume manager in one, allowing huge storage capacities. Starting with the new ISO installer for Proxmox VE 3.4, users can now select their preferred root file system during installation (ext3, ext4 or ZFS). All ZFS raid levels can be selected, including raid-0, 1, or 10 as well as all raidz levels (z-1, z-2, z3). ZFS on Proxmox VE can be used either as a local directory, supporting all storage content types (instead of ext3 or ext4) or as zvol block-storage, currently supporting KVM images in raw format (with the new ZFS storage plugin).
  • Using ZFS allows advanced setups for local storage like live snapshots and rollbacks but also space and performance efficient linked templates and clones. The ZFS storage plugin in Proxmox VE 3.4 complements already existing storage plugins like Ceph or the ZFS for iSCSI, GlusterFS, NFS, iSCSI and others.
  • The new hot-plugging feature for virtual machines allows installing or replacing virtual hard disks, network cards, or USB devices while the server is running. If hot plug is not possible, the new “pending changes” (now marked in red) show that the changes need a power-off to be applied, so the admin always has an overview of the current status of the changes.

New in Proxmox Virtual Environment 3.3 (Sep 16, 2014)

  • improved security features:
  • Firewall support (new package pve-firewall)
  • Two-Factor Authentication (Yubico and OATH)
  • pve-manager (GUI) updates:
  • new Proxmox VE Firewall
  • noVNC console
  • openvz: add bridge vlan && firewall options to gui
  • new Proxmox VE Mobile, GUI for mobile devices
  • add new 'Pool View'
  • ZFS storage can now be configured on GUI
  • glusterfs: new option to specify backup volfile server
  • add new email_from option to datacenter.cfg
  • add Persian (Farsi) translation.
  • improved Spanish translation
  • update Chinese translation
  • Countless updates and fixes
  • update to qemu 2.1.0:
  • pci passthrough improvements
  • hotplug improvements
  • migration : enable auto-converge capability
  • add cpu_hotplug (and maxcpus config)
  • add virtio-net multiqueue support
  • new option smbios1 to specify SMBIOS type 1 fields
  • set uuid for newly created machines
  • support new q35 machine type
  • add Broadwell cpu model
  • compile with new libiscsi (1.12.0)
  • use glusterfs 3.5.2 libraries
  • support drive option 'discard'
  • add support for new qemu throttling burst max parameters
  • add 'vmxnet3', 'lsi53c810', and 'pvscsi' to the list of available network card models
  • improved Console support:
  • HTML5 Console for shell, VM and container console (noVNC)
  • noVNC console is now the default
  • vncterm: new option -notls (for novnc, which use 'wss')
  • vncterm: updated signature for java applet to avoid warnings
  • pve-kernel-2.6.32-32-pve: 2.6.32-136:
  • update aacraid, arcmsr, netxtreme2, ixgbe, igb, megaraid_sas and e1000e drivers
  • update to vzkernel-2.6.32-042stab093.4.src.rpm
  • allow to use grub-efi-ia32 boot loader
  • pve-kernel-3.10.0-4-pve: 3.10.0-17:
  • enable vfio xfga
  • update arcmsr, netxtreme2, ixgbe, igb, e1000e drivers
  • update to kernel-3.10.0-123.6.3.el7.src.rpm
  • allow to use grub-efi-ia32 boot loader
  • Note: there is still no OpenVZ support with this kernel
  • update ceph packages to 'firefly' (0.80.5):
  • Note: Please upgrade ceph packages first if you run ceph server on proxmox nodes (see ceph upgrade instructions).
  • update gluster packages to 3.5.2
  • fence-agents-pve: 4.0.10:
  • update to 4.0.10
  • add fence_ovh and fence_amt
  • remove baytech, bullpap, cpint, egenera, mcdata, nss_wrapper, rackswitch, vixel, xcat. Those agents are no longer included in the upstream package.
  • removed fence_scsi
  • Note: This includes updates for fence_ipmilan (fence_ilo3, fence_ilo4, fence_imm, and fence_idrac), and some parameter names changed (see 'man fence_ipmilan'). Please verify that your fence device still works if you use HA.
  • based on Debian Wheezy 7.6
  • countless bug fixes and package updates, for all details see bugtracker and GIT

New in Proxmox Virtual Environment 3.2 (Mar 13, 2014)

  • Improved SPICE support:
  • spiceterm: console for OpenVZ and host
  • add new console option to datacenter.cfg (java applet vs. spice)
  • add multi-monitor support
  • GUI: use split-button to easily select SPICE or VNC
  • more details on http://pve.proxmox.com/wiki/SPICE
  • Update qemu to 1.7.0:
  • add 'pvscsi' to the list of scsi controllers (emulate the VMware PVSCSI device)
  • add 'lsi53c810' to the list of scsi controllers (supported on some very old Windows NT versions)
  • add 'vmxnet3' to the list of available network card models (emulate VMware paravirtualized network card)
  • add drive option 'discard'
  • add support for new qemu throttling burst max parameters
  • improved live backup
  • pve-kernel-2.6.32-27-pve: 2.6.32-121:
  • update to vzkernel-2.6.32-042stab084.20.src.rpm
  • update e1000, igb, ixgbe, netxtreme2, megaraid_sas
  • include latest ARECA RAID drivers
  • update Broadcom bnx2/bnx2x drivers to 7.6.62
  • update aacraid to aacraid-1.2.1-30300.src.rpm
  • Ceph Server (Technology Preview):
  • new GUI to manage Ceph server running on PVE nodes
  • more details on http://pve.proxmox.com/wiki/Ceph_Server
  • added Open vSwitch support (Technology Preview)
  • Optional 3.10 Kernel (based on RHEL7 beta, currently without OpenVZ support, for testing only)
  • storage: new ZFS plugin (Technology Preview), see http://pve.proxmox.com/wiki/Storage:_ZFS
  • storage: remove nexenta plugin (ZFS plugin is faster)
  • updated GlusterFS to 3.4.2
  • ISO installer now always uses a GPT partition table:
  • added 'gdisk' to manage and view partitions via CLI
  • based on Debian Wheezy 7.4
  • countless bug fixes and package updates (for all details see bugtracker and Git)

New in Proxmox Virtual Environment 3.1 (Aug 21, 2013)

  • We just released Proxmox VE 3.1, introducing great new features and services. We included SPICE, GlusterFS storage plugin and the ability to apply updates via GUI (including change logs).
  • As an additional service for our commercial subscribers, we introduce the Proxmox VE Enterprise Repository. This is the default and recommended repository for production servers.

New in Proxmox Virtual Environment 2.3 (Mar 5, 2013)

  • update qemu-kvm to 1.4.0
  • new kvm backup implementation, see Backup and Restore
  • added RBD (ceph) support on GUI
  • update kernel to vzkernel-2.6.32-042stab072.10.src.rpm
  • include latest Broadcom bnx2/bnx2x drivers
  • include latest Adaptec aacraid driver 1.2-1[29900]
  • update e1000e to 2.2.14
  • update igb to 4.1.2
  • update ixgbe to 3.12.6
  • enable CONFIG_RT_GROUP_SCHED (also update corosync if you install this kernel)
  • extend memory GUI to support ballooning
  • implement auto-ballooning
  • add HD resize feature to expand disks
  • updated network drivers (bnx2/bnx2x/e1000e/igb/ixgbe)
  • added omping binaries (for testing multicast between nodes)
  • update to latest Debian version 6.0.7
  • qcow2 as default storage format, cache=none (previously raw)
  • KVM64 as default CPU type (previously qemu64)
  • e1000 as default NIC (previously rtl8139)
  • task history per VM
  • Node Summary: added "KSM sharing" and "CPU Socket count"
  • enable/disable tablet for VM on GUI without stop/start of VM (you can use vmmouse instead, for lower CPU usage, works on modern Linux and on all Windows VMs as long as you install the vmmouse drivers)
  • bug fixes (for all details see bugtracker and Git)
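The omping binaries mentioned in the list above test multicast connectivity between cluster nodes; a common invocation can be sketched as follows (node names are placeholders, and the same command should be started on every listed node in parallel):

```shell
# Send 10,000 multicast probes at 1 ms intervals and report only a summary
omping -c 10000 -i 0.001 -F -q node1 node2 node3
```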

New in Proxmox Virtual Environment 2.2 (Oct 24, 2012)

  • update kernel to vzkernel-2.6.32-042stab062.2.src.rpm
  • update Intel nics drivers (e1000e to 2.1.4, ixgbe to 3.11.33, igb to 4.0.17)
  • update qemu-kvm to 1.2.0
  • openvz: update vzctl to 4.0
  • openvz: use real console instead of 'vzctl enter'
  • add live snapshot support (qcow2)
  • added Slovenian translation
  • kvm: new option to select SCSI controller hardware
  • kvm: support up to 32 network devices
  • kvm: support up to 16 virtio devices
  • kvm: add SATA to GUI
  • updated cluster packages
  • update to latest Debian version 6.0.6
  • bug fixes (for all details see bugtracker and Git)

New in Proxmox Virtual Environment 2.0 RC1 (Feb 17, 2012)

  • Role-based user and permission management for all objects (VMs, storage, nodes, etc.)
  • Support for multiple authentication sources
  • Microsoft Active Directory
  • LDAP
  • Linux PAM
  • Proxmox VE internal authentication
  • New Kernel, based on vzkernel-2.6.32-042stab049.6.src.rpm
  • vzdump now uses LZO compression by default (faster)
  • Countless bug fixes

New in Proxmox Virtual Environment 2.0 Beta 2 (Nov 29, 2011)

  • Backup and Restore:
  • GUI and CLI, works for OpenVZ containers and KVM VMs
  • "Backup Now" via GUI
  • Completely new backup scheduler
  • All jobs can be monitored as “Recent tasks”
  • vzdump package is obsolete, all code went into pve-manager package
  • OpenVZ:
  • Multiple storages for OpenVZ containers; no longer limited to /var/lib/vz!
  • vswap support
  • Improved init.log (shows the startup logs of an OpenVZ container)
  • KVM monitor:
  • Simple File Manager
  • Upload ISO images and templates via browser
  • No size limit anymore; you can upload DVD ISOs to all storage types via the browser, even Blu-ray images work
  • Works on all nodes, e.g. if you are connected on node1 you can upload images to storage on node2
  • Logging:
  • Syslog viewer completely reworked, with auto refresh
  • Clusterwide “Recent tasks”, major improvements
  • “Right-click” menus
  • Start and shut down containers and VMs
  • Open VNC Console
  • VNC Console:
  • Can also be opened for non-running containers and VMs (easier to debug startup issues for KVM guests)
  • Browser support:
  • Initial support for IE; Firefox and Chrome are preferred (always use the latest versions)
  • Kernel:
  • Based on vzkernel-2.6.32-042stab042.1.src.rpm
  • Countless small bug fixes