Performance Decline on "Madison" Server (Resolved)
  • Priority - High
  • Affecting Server - Madison
  • As you may have noticed, the server "Madison" performs better than it did over the past 2 days, but still not as well as before the recently applied system updates.

    The performance decline is caused by multiple factors:

    • The recent updates include the patch that covers the Spectre/Meltdown vulnerabilities. These patches are reported to cause a decline in performance (New Spectre variant 4: Our patches cause up to 8% performance hit, warns Intel). Intel estimates an 8% performance hit, but we've seen various reports of up to 15% performance decline.
    • Over the past 2 years, we've been quite permissive and tolerant with accounts that have been overusing their assigned resources. We've even silently increased resources - at some point up to 200% more than advertised on our website - because the server could handle the traffic and usage of all websites just fine. Unfortunately, this tolerance turns out to be quite problematic now, as the traffic- and resource-intensive accounts have a much more significant impact with the recent CPU limitation.
    • The MySQL/MariaDB server uses the most CPU and memory. While the server has plenty of memory, the CPU limitation causes MySQL/MariaDB to perform slower, which in turn makes the websites load slower as well. We have done a few optimizations, but unfortunately, they don't work wonders and the SQL queries from websites still put a higher load on the server.
    • The server reboot requires a full block scan during the next R1Soft Server Backup replication. This process is very resource-intensive and took over 9 hours to complete, from Sunday night until noon. The CDP backup agent is now getting back in sync and should use fewer resources once it's done. We expect this to be finished by Monday afternoon.

     

    What we're doing to resolve this issue permanently:

    Technically, since we've optimized everything as much as possible on the Madison server, there's nothing more that can be done at the moment. Therefore, we've decided to deploy a new shared hosting server with Dual Intel Xeon Gold 5118 CPUs that should handle the load better.

    Once the new server is ready (within 1 or 2 days), we will transfer the top 20 most resource-intensive accounts to it to ease the load on the Madison server and balance it between the two. This step alone should largely resolve the issue.

    Furthermore, we will deploy MySQL Governor on both servers, which will prevent abusive accounts from overloading the database server. The MySQL processes will run inside each account's LVE container, so if a website overuses MySQL, only that account will be affected by its heavy SQL usage.

    Another step will be the switch from PHP 5.6 to 7.1 as the system PHP version. As PHP 5.6 nears its end of life in December 2018, the switch to PHP 7.1 is inevitable anyway. PHP 7 is known to be more resource-friendly and it should improve performance a bit more. We will announce the switch to PHP 7.1 soon. It will probably be done on the 1st of November, 2018.

    Once we complete these steps, we believe that the websites will perform at least the same or maybe even better than before. The 20 most resource-intensive accounts will be moved from the Madison server within 1 to 3 days and the rest of the measures will be applied within the next 2 or 3 weeks. We'll post updates here regarding our progress.

    If your account is among those 20 most resource-intensive accounts, we will inform you about the transfer to the new server at least 1 hour in advance. No action should be required from your side other than updating your domain's nameservers or IPs. DNS propagation will be shortened by reducing the TTL (Time to Live) to 5 minutes and by updating the DNS records on the Madison server as well.
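
    For reference, you can verify the lowered TTL yourself. The sketch below is illustrative only (it assumes the dnspython package is installed and uses a placeholder domain and resolver):

        # Minimal sketch: check the TTL currently served for a domain's A record.
        # Requires dnspython 2.x ("pip install dnspython"); "example.com" and the
        # resolver IP are placeholders.
        import dns.resolver

        resolver = dns.resolver.Resolver()
        resolver.nameservers = ["8.8.8.8"]          # any public resolver works

        answer = resolver.resolve("example.com", "A")
        print("A records:", [r.address for r in answer])
        print("TTL seen by this resolver (seconds):", answer.rrset.ttl)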

    We're honestly sorry about the inconvenience caused and kindly ask for your patience while we complete these steps to resolve the performance issues permanently. Thank you!

    Update 14.10.18 23:22 CEST: All MySQL/MariaDB databases of all accounts are being optimized at the moment. This is a time- and resource-intensive process, which should be completed within 8 hours. Databases that are fragmented should perform a bit better after the optimization.
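
    For illustration, a bulk optimization of this kind can be scripted roughly as follows (a minimal sketch, not our exact procedure; it assumes admin credentials are available via ~/.my.cnf):

        # Run OPTIMIZE TABLE across all databases via mysqlcheck; this rebuilds
        # and defragments tables, which is why it is time- and I/O-intensive.
        import subprocess

        subprocess.run(["mysqlcheck", "--optimize", "--all-databases"], check=True)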

    Update 15.10.18 00:39 CEST: The database optimizations have been successfully completed. We see a very positive impact on performance at the moment, but it's too early to say whether this is a temporary effect or permanent. We'll keep monitoring the Madison server and still work on setting up the new server. Updates will follow.

    Update 15.10.18 00:49 CEST: The R1Soft backup replication will start in 10 minutes and might cause temporary high load. This time it's an incremental replication and should only take about 4 hours.

    Update 15.10.18 00:51 CEST: Recommendation: WordPress websites can be optimized further by installing and enabling the LiteSpeed Cache for WordPress plugin. We've granted free access to LSCache to all accounts recently and it can boost the performance of your website significantly. More information about it here: https://www.litespeedtech.com/products/cache-plugins/wordpress-acceleration

    Update 15.10.18 10:23 CEST: The performance issues are still continuing, unfortunately, now that traffic has started to increase for many websites. We are still working on the above solutions.

    Update 15.10.18 12:05 CEST: The issue might be related to CloudLinux and the network drivers/configuration, as uploads to the server are very fast, while downloads are very slow. This also happens on a different CloudLinux server, but not on servers running CentOS, so the CloudLinux software or a related component might be causing the issue. We're looking into this now and will contact CloudLinux for assistance.

    Update 15.10.18 12:50 CEST: The CloudLinux system admins are on the server now and have started their investigation.

    Update 15.10.18 15:18 CEST: CloudLinux insist that the issue isn't caused by their software. Our system admins are continuing to investigate.

    Update 15.10.18 16:52 CEST: To rule out the web server (LiteSpeed Web Server) as a potential cause, we are planning to re-install it using the default settings. Short website outages of a few minutes might occur during this process.

    Update 15.10.18 17:24 CEST: Before going ahead with the LSWS reinstallation, we have been monitoring the server statistics and still see that System (kernel) CPU usage is high compared to the other categories, including user CPU usage. We have updated CloudLinux support with this result once again and are waiting for their reply.
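
    For those interested, the user/system CPU split we are referring to can be measured directly from /proc/stat. A minimal sketch (Linux only, aggregated over all cores):

        import time

        def cpu_times():
            # The first line of /proc/stat holds the aggregate "cpu" counters.
            with open("/proc/stat") as f:
                fields = f.readline().split()[1:]
            user, nice, system, idle, iowait = map(int, fields[:5])
            return user + nice, system, idle + iowait, sum(map(int, fields))

        u1, s1, i1, t1 = cpu_times()
        time.sleep(5)                                # sample interval in seconds
        u2, s2, i2, t2 = cpu_times()

        total = (t2 - t1) or 1
        print(f"user:   {100 * (u2 - u1) / total:5.1f} %")
        print(f"system: {100 * (s2 - s1) / total:5.1f} %")
        print(f"idle:   {100 * (i2 - i1) / total:5.1f} %")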

    Update 15.10.18 18:20 CEST: The CloudLinux ticket has been escalated to Level 3 support (the highest, most experienced level). The average response time of this department is 1 or 2 business days. Since we can't leave things this way for that long, we will proceed with the above plan of re-installing LSWS, setting up a new server and moving some accounts to it. This should be completed tonight.

    We kindly ask for more patience. The issue must and will be resolved.

    Update 15.10.18 22:17 CEST: The setup of the new server has been completed. We've moved 3 resource-intensive accounts there already and the load has improved a bit on the Madison server.

    LSWS has been re-installed, but this had no impact on performance. We've opened a ticket at LiteSpeed Tech so they can have a look and make sure that nothing is wrong with the web server.

    We will continue to arrange account transfers to the new server to ease off the load on the Madison server, but this might take 1 or 2 days. In the meantime, we hope that the CloudLinux and LSWS technicians will be able to find something.

    Update 16.10.18 01:00 CEST: The main cause of this issue seems to have been found. We are not entirely sure yet, as we'll need to monitor the server tomorrow during traffic peak hours, but so far the server load has dropped and many websites open up smoothly.

    We will update this page again tomorrow and post more information. Thank you for your patience.

    Update 16.10.18 09:14 CEST: Starting this morning, the performance has dropped again. It appears that during the day, there must be one or more websites that are causing high load. We are trying to identify them and will suspend them once found.

    Update 16.10.18 09:52 CEST: We have found 3 websites that were targeted by WordPress-specific Distributed Denial of Service (DDoS) attacks. These attacks are often only successful if the targeted websites are vulnerable and out of date. This is one of the most important reasons why we strongly advise always keeping your websites up to date, without exception. Unfortunately, many clients seem to neglect this, leaving their websites vulnerable to such attacks.

    We will keep monitoring the server and mitigate all attacks as much as possible.

    Update 16.10.18 12:25 CEST: Investigations and optimization works are still ongoing. Performance is still affected. We're trying our best to mitigate the issue.

    Update 16.10.18 12:42 CEST: We have suspended an account that was under attack. The load is back to normal. We'll keep monitoring the server to ensure that the performance remains unaffected.

    Update 16.10.18 16:02 CEST: The latest fixes by the LiteSpeed technicians have been implemented, with very positive results. The server has been fast and stable again for almost 2 hours. We'll keep monitoring it closely, but we believe that the issue is permanently resolved now.

    Update 16.10.18 20:13 CEST: CloudLinux have finally analyzed the issue and suggested installing a beta kernel that addresses this specific issue. The kernel has been installed and the server has already been rebooted. We're going to monitor the server and see how it performs with this kernel. We have been advised that the kernel may still have a few minor bugs, but we assume that it can't get any worse than this.

    We will test the kernel for 1 or 2 days and report our results to CloudLinux. If the kernel fixes this issue and runs stable, we will continue to run it. Otherwise, we will wait for the stable kernel release from CloudLinux.

    Update 16.10.18 20:25 CEST: The kernel is causing too many issues already. We are reverting and will reboot the server again.

    Update 16.10.18 20:47 CEST: We have re-installed and booted into the kernel we used between March and October 2018. The server seems to run stable so far and the issues we saw with the testing kernel no longer occur.

    We will stay with this kernel if it runs stable and performance is good. Although it has some security vulnerabilities, the most critical ones are patched by KernelCare and the others are complicated to exploit, so we should be safe.

    We'll keep monitoring the server closely and if nothing unusual happens, updates will follow tomorrow.

    Update 17.10.18 10:35 CEST: The server load was perfectly fine from yesterday evening until this morning, but has increased again starting at 9 AM. We will try to arrange further transfers to our new server for accounts with high resource usage.

    Update 18.10.18 12:12 CET: We are constantly trying to mitigate the performance issue while waiting for CloudLinux to provide a permanent solution. We're sorry that this issue is taking so long to resolve, but it depends on CloudLinux, who are unfortunately very slow.

    Update 18.10.18 19:57 CEST: The server is being rebooted to enable kernel dumping, in order to provide CloudLinux more detailed information about the system. Services will be unavailable for 5 minutes.

    Update 18.10.18 20:11 CEST: We've rebooted the server again and generated a core dump, which CloudLinux will investigate. They should now have all necessary information to work on a solution. We hope to have it by the end of the week, but this depends entirely on how fast the CloudLinux team is.

    Update 19.10.18 08:57 CEST: The server has been running surprisingly fast and stable since Thursday evening, after CloudLinux logged into the server to investigate and asked for a reboot to generate the kernel dump. They have not made any other changes, as far as we're aware. We'll keep monitoring the server closely and try to mitigate any possible overloads until we receive a decisive answer/solution from CloudLinux.

    Update 22.10.18 21:27 CEST: Today we have transferred the most resource-intensive account from the server "Madison" to the new server "Antero". Except for four short periods of server overload, the performance was smooth the entire day. We will keep monitoring the server closely and, if necessary, transfer two or three more accounts that have slightly higher resource usage than the rest.

    The root of the issue is still pending investigation. We will try to postpone the server reboot and kernel dumps until Wednesday next week, as there were too many outages recently.

    Unless something unusual happens in the meantime, we will update this status item again next week. Any possible performance issues will be mitigated promptly until then.

    Update 30.10.18 12:52 CET: We are going to update to a new kernel that was released this week, in an attempt to address the performance/stability issues. The server will go down for a reboot within 5-10 minutes.

    Update 30.10.18 13:11 CET: The kernel has been installed and the server is now being rebooted.

    Update 30.10.18 13:47 CET: The server is up, running the newest kernel. Most websites open up smoothly and the average load has dropped at this moment, but the load is still a bit higher than usual. We cannot determine yet if it will continue to remain stable or increase again.

    We will contact CloudLinux again to resume their investigations and arrange further transfers of accounts that have the highest CPU usage. This should ease off the server even more. Updates will follow once we hear back from CloudLinux.

    Update 01.11.18 12:33 CET: CloudLinux have suggested installing a recently released "beta" kernel, which fixes various issues and might fix our performance issue as well. However, we prefer not to install "beta" kernels on our production systems, as these are usually unstable.

    The server has been running smoothly since our last update, except for a few short overloads that we were able to mitigate promptly. Unless the server becomes heavily overloaded again for longer periods, we will wait for the upcoming "stable" kernel to be released by CloudLinux and update to it shortly after release. It often takes 1 or 2 weeks for "beta" kernels to be promoted to the "stable" channel.

    We also plan to switch to PHP 7.1 (as the default version) and enable LiteSpeed Cache for all WordPress websites that have no caching plugin installed. This should stabilize the server even more and speed up most websites. An announcement with the exact schedule will be emailed to all clients soon. If the performance bug gets fixed as well, the server should become even faster than before.

    Update 05.11.18 22:15 CET: A new kernel for CloudLinux 7 has been released, which will hopefully improve performance. We are currently installing it and will reboot the server right after.

    Update 05.11.18 22:28 CET: The server is going down for reboot now.

    Update 05.11.18 22:33 CET: The server has been successfully rebooted with the new kernel. We will monitor the server closely for at least the next 3 days and see if the new kernel brings positive results to the overall performance. Should the performance issues still persist, we will contact CloudLinux again and continue to mitigate the performance issues in the meantime.

    Imunify360 version 3.7 is also expected to be released on Thursday this week. We've been told that this version should use a bit less resources, which would benefit the server's performance.

    Update 06.11.18 07:50 CET: We have detected that the system does not log anything to the "messages" log, which can be quite a serious problem. We will reboot the server again in about 2 hours in an attempt to fix this. If the system logs still don't work, we'll have to return to the old kernel.

    Update 06.11.18 09:24 CET: The server will now be rebooted.

    Update 06.11.18 09:32 CET: The server has been successfully rebooted and the system log works properly now. It seems to have been just a glitch. We'll continue to monitor the server, as mentioned three updates above.

    Update 06.11.18 11:47 CET: The server is currently extremely overloaded. We are trying to mitigate this issue now.

    Update 06.11.18 11:51 CET: The server is now being rebooted.

    Update 06.11.18 11:58 CET: Another reboot is required, as the IP addresses haven't loaded.

    Update 06.11.18 12:07 CET: All IPs have loaded and all services/websites are back online. We're investigating the reason for the earlier overload.

    Update 06.11.18 12:40 CET: The server has been running stable since the last reboot. We could not determine the exact cause, as the system was almost frozen earlier, but we suspect that Imunify360 might have caused or contributed to the extreme overload.

    Currently, there are multiple issues that cause sporadic overloads:

    • Imunify360 often has medium to high CPU usage. We'll update to the upcoming stable version on Thursday, which is expected to address this issue. Otherwise, if the server becomes extremely overloaded again, before or after the update, we'll temporarily uninstall Imunify360.
    • The Dovecot (POP3/IMAP) service is often restarting and causing high CPU usage for short periods of 1-2 minutes. This issue has been reported to cPanel and we're awaiting a solution.
    • Many websites/accounts use misconfigured and/or outdated scripts (e.g. WordPress) that are often loaded with dozens of plugins that cause high resource usage. A few simultaneous visitors to such websites are often enough to cause the entire server to slow down, especially when multiple websites like these get traffic at the same time.
    • WordPress has a script called "admin-ajax.php" that is widely known to cause high CPU usage in the background. This often happens when WordPress has too many plugins installed. A few users logged in to the WP admin area of such websites simultaneously are often enough to cause a temporary overload.

    If you use WordPress, you can help us to prevent such issues and reduce the load by updating WordPress and all plugins/themes, removing plugins that are not necessary and installing the LSCache plugin. This also applies to other scripts such as Joomla, Drupal, etc. We highly recommend keeping them up-to-date and using caching.

    Soon we will upgrade to PHP 7.1 and install LSCache for all WordPress sites that have no caching enabled. This should help some websites and the server overall with the performance.

    We're still monitoring the server closely and will try to prevent/mitigate possible overloads.

    Update 06.11.18 13:05 CET: Imunify360 has caused another slight overload and short outage. We're currently uninstalling it. Some services might fail for a few minutes during the uninstall process.

    Update 06.11.18 13:27 CET: Imunify360 is still uninstalling. The web server fails to start until the uninstall completes, unfortunately.

    Update 06.11.18 13:39 CET: The server was unresponsive and is now being rebooted.

    Update 06.11.18 13:54 CET: We have booted into the previous kernel and are now attempting to uninstall Imunify360 again.

    Update 06.11.18 13:59 CET: Imunify360 has been uninstalled. All services are running. We expect no further outages in the short term, as the server is now running with the previous kernel.

    These issues will be reported to CloudLinux and are pending further investigation.

    Update 06.11.18 14:48 CET: The server has been running smoothly since the last reboot. Considering that the previous kernel is running now and Imunify360 has been uninstalled, the server should continue to run stable from here on. We'll keep monitoring it, of course.

    Update 06.11.18 16:10 CET: The server has become extremely overloaded again and is now rebooting.

    Update 06.11.18 17:22 CET: The server is running stable now, with the load lower than before. The reason for the overload and crash this time was that KernelCare was still applying live patches to the old kernel; essentially, we were running the old kernel patched with the code of the new kernel.

    KernelCare has been unloaded and uninstalled now. No further kernel patches are applied and we will refrain from updating the kernel until CloudLinux have a reliable solution.

    Another issue was that the Majestic and Semrush bots were crawling multiple websites at very high rates, putting additional load on the server. We've added ModSecurity rules to block these bots, as they're known to be overly aggressive when crawling websites.
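
    As an aside, overly aggressive crawlers are easy to spot by counting requests per User-Agent in the web server's access log. A rough sketch (the log path and the combined log format are assumptions):

        import re
        from collections import Counter

        LOG = "/path/to/access_log"                        # placeholder path
        ua_at_end = re.compile(r'"[^"]*" "([^"]*)"\s*$')   # last quoted field = User-Agent

        counts = Counter()
        with open(LOG, errors="replace") as f:
            for line in f:
                match = ua_at_end.search(line)
                if match:
                    counts[match.group(1)] += 1

        for agent, hits in counts.most_common(10):
            print(f"{hits:8d}  {agent[:80]}")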

    Update 07.11.18 08:55 CET: The server continues to run smoothly. It is faster and more stable than it has been in at least the past 4 weeks. The average server load has dropped to its lowest point this month. It might rise again, as we usually expect more traffic to come in during the day.

    Update 07.11.18 21:30 CET: Performance and stability have been very good today. No outages have occurred. We're still expecting CloudLinux to find the root of the issue and provide a permanent fix. Our efforts so far have only mitigated the issue, not resolved it.

    Today there was only a temporary issue with sites behind CloudFlare, as a technician accidentally removed the CloudFlare IPs from the firewall whitelist and some of them were temporarily blocked. This has been corrected. We're sorry if your website was affected by this in any way.

    Another issue that we're currently dealing with is a few email accounts that are being targeted by massive distributed brute-force attacks. Our firewall keeps blocking and mitigating these attacks, but unfortunately, this also puts additional load on the CPU. We cannot do anything but wait for these attacks to stop.

    We'll continue to monitor the server closely until CloudLinux finally provide a feasible solution. Thank you for your patience and understanding so far! It's greatly appreciated.

    Update 08.11.18 10:15 CET: The server keeps holding up well. The load is fine and the websites are performing smoothly.

    CloudLinux are still suggesting that we install a "beta" kernel, but we refuse to do this as long as they cannot guarantee that it addresses this particular issue. We've tried 5 kernels so far and all of them caused even more issues than we have now. None of them have fixed the core of this issue. Our system admins are still in discussion with their team, trying to get a feasible solution.

    Update 12.11.18 18:00 CET: The server continues to run quite well, with a few minor exceptions when some websites receive sudden spikes in traffic, but we've been able to mitigate these promptly. Overall, the server has not crashed again and the websites open fast enough.

    As announced today, we will upgrade to PHP 7.1 as the system default version and install LiteSpeed Cache on all WordPress websites during the course of this week. This should improve performance and stability even more.

    CloudLinux still have not responded. Today we've talked to a senior system administrator who works for multiple hosting companies, and he has confirmed that we're not the only ones experiencing performance and stability issues with CloudLinux servers. These issues actually started to appear in the past 5 months, but our servers are only affected now because we hadn't updated the CloudLinux kernel since May. Only the new kernels are affected.

    We will continue to remind CloudLinux about this issue and keep pushing them until they finally acknowledge it and work on a permanent solution.

    Update 16.11.18 03:45 CET: No major issues have occurred since the last update. The performance continues to hold up and the server and all its services/websites have remained online continuously since last week. Tomorrow we'll upgrade to PHP 7.1 as the system-default PHP version, which will hopefully improve performance even more.

    We've received two reports regarding the mail server being slow. In one case, this was actually caused by a wrong password saved in the email client, which caused the client's IP address to get blocked. If you're also experiencing issues with the mail services, kindly contact our technical support department.

    A few minutes ago, we re-installed Imunify360, as the long-awaited "stable" version 3.7 was released two days ago. We hope that this version will require less CPU and not affect the server performance. It should also help to better mitigate the distributed brute-force attacks against some email accounts.

    Update 16.11.18 06:10 CET: The server became unresponsive earlier and a reboot was necessary. We've booted into an old kernel this time, which won't be patched by KernelCare. The booted kernel was in use for several months, prior to these issues, and we hope that the system will be more stable with it until CloudLinux finally have a solution.

    We've also upgraded the RAM from 32 GB to 40 GB, as the swap partition was being used and this is not good for performance.
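
    Swap usage is easy to check on any Linux server; a minimal sketch reading /proc/meminfo:

        def meminfo():
            info = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, value = line.split(":")
                    info[key] = int(value.split()[0])    # values are reported in kB
            return info

        m = meminfo()
        used_mb = (m["SwapTotal"] - m["SwapFree"]) / 1024
        print(f"Swap in use: {used_mb:.0f} MB of {m['SwapTotal'] / 1024:.0f} MB")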

    Update 16.11.18 11:07 CET: The server continues to run smoothly since the last reboot. We noticed CloudLinux logging into the server earlier, most probably to investigate further, but no response has been received from them yet. We'll keep monitoring the server closely and await a response/solution from the CloudLinux team.

    Update 17.11.18 12:00 CET: We finally seem to be making some progress. CloudLinux have investigated this issue again, this time a bit more rigorously, and have made some tweaks to the kernel settings (related to memory management and network connections). They've advised us to switch back to the newest kernel with these settings, which we'll do, but we're trying to arrange a schedule with them so they can perform the update themselves and monitor the server for 1-2 hours.

    Since Saturday is the day with the lowest traffic, according to our statistics, we'd like to schedule this task 2 weeks from now, as next week is Black Friday and some shops might be running ad campaigns for this event. We'll try not to touch the server next week if there are no issues.

    The server continues to run very well so far, but it's running an old kernel and we must switch to the latest kernel sooner rather than later, as newer kernels patch some security vulnerabilities, such as the Spectre/Meltdown vulnerabilities in Intel processors. We hope that the next kernel update, planned 2 weeks from now, will be successful.
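
    For reference, recent kernels report the mitigation status of these CPU vulnerabilities under sysfs. A short sketch to print it (Linux only, and it assumes a kernel that exposes these files):

        import glob, os

        for path in sorted(glob.glob("/sys/devices/system/cpu/vulnerabilities/*")):
            with open(path) as f:
                print(f"{os.path.basename(path):20s} {f.read().strip()}")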

    Update 23.11.18 20:20 CET: The server was able to sustain the traffic and load during Black Friday without any issues. With a few minor exceptions, the server performance overall has improved significantly since last month.

    This week, a new CL7 kernel was released that addresses security, CPU and memory related bugs. We've successfully installed the kernel on some other servers. If no issues arise, we'll install it on the Madison server as well. This will hopefully resolve all residual issues and finally allow us to mark this case as solved.

    Update 26.11.18 22:50 CET: As reported in a separate incident, the server suddenly crashed earlier today. We've decided on short notice to install the newest CloudLinux kernel update, which was recently released. The server has been online for 40 minutes now, running the newest kernel without any issues so far. All services and websites are currently online and the performance is optimal.

    A full block scan will be performed by R1Soft Server Backup due to the reboot. This will cause increased load and disk activity, which is normal. The backup should complete by tomorrow morning and the load caused by R1Soft should drop gradually within 48 hours at most.

    Right now, the average server load has reached its lowest point for this hour in the past 30 days (a lower load value is better). All in all, the server is performing really well. It's too early to say for sure, but this might be a good sign that the residual issues of the past 6 weeks are finally resolved with this kernel update. Tomorrow during the day, when traffic starts to increase, we should be able to make a final assessment.

    We'll continue to monitor the server closely and react immediately if any issues are detected.

    Update 27.11.18 08:30 CET: The server continues to run really well. Stability and performance seem to have been restored with the latest kernel update. We'll keep monitoring the server throughout the day before we can determine for sure whether we can finally mark this incident as resolved.

    Update 28.11.18 17:30 CET: We can finally say with certainty that the root cause of the performance and stability issues has been fixed. The new CloudLinux kernel is running smoothly and the server overall is lightning-fast and rock-stable again. :-)

    The average load is stable, within the 1.0 to 8.0 range (out of 12.0). All services have been online without interruption, and the websites are opening even faster than before. The positive side of this issue is that we have optimized the server and websites to their maximum potential (on the server side). These improvements are reflected in the page load times, which are now under 1 second for most websites. We've received a lot of positive feedback lately, which is very relieving after the wave of complaints.
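
    For context on these load figures: a 1-minute load average at or below the number of CPU cores generally means no CPU queueing. A tiny sketch to put the numbers in relation:

        import os

        load1, load5, load15 = os.getloadavg()
        cores = os.cpu_count() or 1
        print(f"load averages: {load1:.2f} {load5:.2f} {load15:.2f} on {cores} cores")
        print(f"CPU saturation (1 min): {100 * load1 / cores:.0f} %")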

    This week, we'll send an extensive complaint to CloudLinux, with the request to improve their quality assurance and testing process, as these kinds of issues should never appear in stable releases that are intended for production use. We will also address the superficial manner in which our case was handled, as our reports that the kernel, and not our server, was the issue were repeatedly ignored. This should not happen, especially with commercial software.

    We'd like to thank all clients very much for their tremendous patience and understanding. It is greatly appreciated! If you notice any further issues, please don't hesitate to let us know.

  • Date - 14.10.2018 00:00 - 28.11.2018 17:30
  • Last Updated - 28.11.2018 17:48
Scheduled Network Maintenance at myLoc Data Center (Resolved)
  • Priority - Medium
  • Affecting System - myLoc Data Center Network
  • Start Time: Monday, 26 November 2018 05:00 CET
    Estimated End Time: Monday, 26 November 2018 08:10 CET
    Estimated Duration: 10 minutes

    Server / Equipment: Network
    Type of Work: Planned maintenance
    Impact of Work: Possible network outage, packet loss
    Estimated Downtime: approx. 10 minutes

    The myLoc data center has informed us about planned maintenance work on their network. The maintenance will be carried out during the above time-frame.

    During the maintenance, some servers might be unreachable for up to 10 minutes.

    You can follow the status of this maintenance here and/or on the status page of myLoc data center at http://www.myloc-status.de/de/1362/

    We're sorry for any inconvenience this may cause. Thank you for your understanding.

  • Date - 26.11.2018 08:00 - 26.11.2018 08:10
  • Last Updated - 26.11.2018 22:59
Server "Madison" unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Madison
  • The server "Madison" has suddenly crashed and is currently unavailable.

    We have rebooted it now and the services are loading at the moment.

    Please stand by for further information. We're sorry for the inconvenience caused.

    Update 26.11.18 21:31 CET: The server and all services are now back online. We're now investigating the cause of the crash.

    Update 26.11.18 21:47 CET: No relevant information was found in the system logs, unfortunately. We believe that this issue could be related to the old kernel that was in use until now. In an attempt to prevent further crashes such as this one, and to resolve the residual issues reported in the past weeks, we will install the newest CloudLinux stable kernel that was released today, as it includes security and bug fixes relevant to the past issues. A reboot will be required to apply the update.

    Update 26.11.18 22:12 CET: The server is going down for reboot now. It should be back online within 5 minutes.

    Update 26.11.18 22:35 CET: The server and all websites/services have been back online for 20 minutes now. The kernel seems to be running smoothly so far. We'll keep monitoring it closely and react if any issues are detected. We're sorry for the inconvenience caused and thank you for your understanding.

  • Date - 26.11.2018 21:27 - 26.11.2018 22:35
  • Last Updated - 26.11.2018 22:39
Scheduled Upgrade to PHP 7.1 (Resolved)
  • Priority - High
  • Affecting Server - Madison
  • With the PHP 5.6 and PHP 7.0 branches reaching their end of support time-frame, we will be upgrading all servers and accounts that are still running PHP 5.x and/or PHP 7.0 to PHP 7.1.

    The upgrade to PHP 7.1 as the system default version is scheduled to take place on 17 November 2018. All accounts that are still running PHP 5.x or PHP 7.0 will be switched to PHP 7.1.

    This upgrade is expected to improve the overall performance of our servers and the hosted websites, as PHP 7 is known to have significantly better performance than PHP 5.

    The upgrade from PHP 5.6 to PHP 7 is a major upgrade. There are a few incompatibilities and new features that you or your developers should take into consideration.

    As a rule of thumb, your website/account should be compatible with PHP 7.1 if all your scripts (e.g. WordPress, Joomla, etc.) are up-to-date. Updates should be applied at least once per month. If you haven't updated your scripts in a long time, we strongly advise you to update as soon as possible to ensure full compatibility and close security vulnerabilities. If you use WordPress, you can check the compatibility of your plugins/themes with the PHP Compatibility Checker plugin.
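
    If you'd like a quick first pass before the switch, the sketch below flags PHP files that still call functions removed in PHP 7 (e.g. the mysql_* extension, ereg*() and split()). It is illustrative only, not an exhaustive compatibility check, and the scanned path is a placeholder:

        import os, re

        REMOVED = re.compile(r"\b(mysql_\w+|ereg\w*|split|call_user_method\w*)\s*\(")

        def scan(root):
            # Walk the directory tree and report lines that reference removed functions.
            for dirpath, _, files in os.walk(root):
                for name in files:
                    if not name.endswith(".php"):
                        continue
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, errors="replace") as f:
                            for lineno, line in enumerate(f, 1):
                                match = REMOVED.search(line)
                                if match:
                                    print(f"{path}:{lineno}: {match.group(1)}()")
                    except OSError:
                        pass

        scan("public_html")   # placeholder path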

    Read more...

    Update 17.11.18 15:07 CET: We will now proceed switching the system and all accounts to PHP 7.1.

    Update 17.11.18 15:33 CET: We have switched the system default version and all accounts to PHP 7.1, except the accounts that have PHP 7.2 already enabled.

    PLEASE NOTE: If your website returns a 500 Internal Server Error, it is most likely related to your scripts being incompatible with PHP 7.1. The exact reason is usually logged in the "error_log" file in the folder where your script is located (e.g. the "public_html" folder). You should update your scripts to ensure compatibility with PHP 7.1 or higher.

    You can switch back to PHP 5.6 or 7.0 if you need to, in cPanel under "Select PHP Version".

    Please note that we provide limited support for accounts running PHP 7.0 or lower.

    Update 17.11.18 16:30 CET: To better track PHP errors following the switch to PHP 7.1, all "error_log" files are being deleted. The logs will therefore only contain errors that occur after this change.

    Update 19.11.18 09:07 CET: Some accounts using CloudLinux PHP Selector (under cPanel » Select PHP Version) have not been switched to PHP 7.1, unfortunately. We have switched these accounts today.

  • Date - 17.11.2018 00:00 - 17.11.2018 23:59
  • Last Updated - 19.11.2018 09:09
Server unresponsive - Reboot initiated (Resolved)
  • Priority - Critical
  • Affecting Server - Madison
  • The server "Madison" has become unresponsive and a reboot was initiated. The server should be back online within 5 minutes.

    We're sorry for the inconvenience.

    Update 16.11.18 06:10 CET: The server and all services/websites have been back online for a few minutes now. We've booted into an old kernel due to the recent CPU performance and stability issues. The booted kernel was in use for several months without issues and we hope that the system will be more stable with it.

  • Date - 16.11.2018 05:51 - 16.11.2018 06:13
  • Last Updated - 16.11.2018 06:13
Scheduled Installation of LiteSpeed Cache for WordPress (Resolved)
  • Priority - Low
  • Affecting Server - Madison
  • As part of our ongoing efforts to mitigate the recent performance and stability issues on the “Madison” server, caused by CloudLinux and some resource-intensive accounts, we have decided, on advice, to enable LiteSpeed Cache for all hosted WordPress websites that have no other caching plugin installed/enabled.

    Developed by LiteSpeed Technologies, the LiteSpeed Cache plugins are designed to accelerate popular web apps with little to no configuration required. The plugin handles traffic spikes with ease and manages the cache precisely with LiteSpeed's Smart Purge technology.

    LiteSpeed Cache was previously only available for Plus+ hosting plans, but due to the issues we have experienced lately, we have decided to make it available for all clients at no additional cost.

    The LiteSpeed Cache plugin for WordPress will be automatically installed and enabled for all compatible WordPress installations on the server on Wednesday, the 14th of November 2018.

    The LiteSpeed Cache plugin will not be installed on incompatible WordPress installations, flagged installations or installations that already have a caching plugin enabled.

    Read more...

    Update 14.11.18 13:45 CET: Currently scanning for all WordPress installations located under a /public_html/ directory.
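
    For illustration, such a scan can be done by locating wp-config.php files and reading the core version from wp-includes/version.php. A minimal sketch (the paths are assumptions, not our exact tooling):

        import glob, os, re

        VERSION_RE = re.compile(r"\$wp_version\s*=\s*'([^']+)'")

        for wp_config in glob.glob("/home/*/public_html/**/wp-config.php", recursive=True):
            root = os.path.dirname(wp_config)
            version_file = os.path.join(root, "wp-includes", "version.php")
            version = "unknown"
            if os.path.isfile(version_file):
                match = VERSION_RE.search(open(version_file, errors="replace").read())
                if match:
                    version = match.group(1)
            print(f"{version:10s} {root}")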

    Update 14.11.18 14:00 CET: The scan has completed. Currently attempting to install/enable LiteSpeed Cache for all compatible installations that were found.

    During the scan, 51 WordPress installations were found to be running versions lower than 4.0. Some were even running WordPress v2, which is over 13 years old!

    It is quite worrying to see that there are still many WordPress sites that have not been updated in years or even over a decade. There are hundreds of known vulnerabilities affecting WordPress 4.x and older that can be easily exploited by any remote attacker. In some cases, these attacks can affect not only the vulnerable website, but our entire server and all hosted websites. If you or your clients run WordPress and you haven't updated it and its themes/plugins, we strongly urge you to do so as soon as possible - for your safety, for the safety of our servers and for the safety of all other clients. Leaving a WordPress installation unpatched for years is simply irresponsible.

    Update 14.11.18 14:30 CET: LiteSpeed Cache has been successfully installed/enabled for all compatible WordPress installations. If your WordPress installation was flagged, outdated or had a different caching plugin in use, LSCache has not been installed for it. You can manually install it after upgrading WordPress and/or disabling any existing caching plugin.

    If your website runs a platform other than WordPress, LSCache plugins for various other platforms are available here: https://www.litespeedtech.com/products/cache-plugins

    This task is now completed. It should help the respective websites load faster and hopefully also improve the overall performance/stability of the "Madison" server.

    If you notice any issues that might be related to LSCache, please contact our technical support department. Thank you!

  • Date - 14.11.2018 00:00 - 14.11.2018 23:59
  • Last Updated - 15.11.2018 00:02
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Antero
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on the server "Antero".

    The kernel update requires a server reboot, which will result in an outage of approximately 5 minutes. All services on the server will be unavailable during the reboot.

    We're sorry for any inconvenience this may cause and thank you for your understanding.

    Update 05.11.18 22:25 CET: The server is going down for reboot now.

    Update 05.11.18 22:27 CET: The server and all services are back online. The maintenance has been successfully completed.

  • Date - 05.11.2018 22:18
  • Last Updated - 05.11.2018 22:27
Server Reboot / Kernel Update to address performance issues (Resolved)
  • Priority - High
  • Affecting Server - Madison
  • We have been advised by CloudLinux to install a new kernel to address the recent performance issues.

    The kernel is now being installed, after which we will reboot the server.

    All services will be unavailable for up to 10 minutes. Thank you for your understanding.

    Update 16.10.18 19:40 CEST: The server is going down for reboot now.

    Update 16.10.18 19:45 CEST: The kernel has been installed and the server is back online. We're going to monitor the server and see how it performs with this kernel. We have been advised that the kernel may still have a few minor bugs.

    Update 16.10.18 20:25 CEST: The kernel is causing too many issues already. We are reverting and will reboot the server again.

    Update 16.10.18 20:47 CEST: We have reverted to the kernel we used between March and October 2018. The server seems to run stable so far and the issues we saw with the testing kernel no longer occur.

  • Date - 16.10.2018 19:34
  • Last Updated - 16.10.2018 20:46
Scheduled Maintenance - Software Updates and Server Reboot (Resolved)
  • Priority - High
  • Affecting Other - Hypervisor "Avalon" and all related VMs, Hypervisor "Manhattan" and all related VMs
  • Type of Work: Software Maintenance
    Impact of Work: Downtime due to server reboot
    Estimated Downtime: minimum 5 minutes, up to 20 minutes (per server)

    Within the above time-frame, the mentioned servers/hypervisors will be rebooted to apply important software updates. The updates include bug and security-related fixes that ensure the stability and safety of our servers.

    All related services/servers will be briefly unavailable during the reboot. Updates will be posted here during the maintenance as they become available.

    Thank you in advance for your patience and understanding. If you have any questions, please contact us.

    Update 12.10.18 19:53 CEST: The maintenance process has been started.

    Update 12.10.18 20:03 CEST: The VPS node "Avalon" and all hosted VMs are going down for reboot now.

    Update 12.10.18 20:26 CEST: The VPS node "Avalon" requires another reboot because Xen has not updated correctly. We expect an extended downtime of about 15 minutes. We're sorry for the inconvenience.

    Update 12.10.18 20:39 CEST: The VPS node "Avalon" has been successfully updated and rebooted. All hosted VMs are now back online.

    Update 12.10.18 20:57 CEST: The VPS node "Manhattan" and all hosted VMs are going down for reboot now.

    Update 12.10.18 21:04 CEST: The VPS node "Manhattan" has been successfully updated and rebooted. All hosted VMs are now back online.

    Update 12.10.18 21:05 CEST: The server "Madison" is going down for reboot now.

    Update 12.10.18 21:07 CEST: All servers have been successfully updated and are now back online.

    Update 13.10.18 10:22 CEST: Unfortunately, we have detected issues with the newest kernel and must revert back to the previous kernel. The VPS node "Manhattan" and all hosted VMs are going down for reboot now.

    Update 13.10.18 10:32 CEST: The VPS node has rebooted, but is still running the new kernel. Another reboot will be required.

    Update 13.10.18 10:43 CEST: Another attempt to boot the old kernel.

    Update 13.10.18 10:55 CEST: The old kernel has been booted. We're going to monitor this issue and announce if there are residual issues.

    Update 13.10.18 11:09 CEST: The VMs suddenly fail to boot. We're investigating now.

    Update 13.10.18 12:04 CEST: The file system of the Xen node boots in read-only mode, which prevents the VMs from booting up and the system from functioning correctly. Our system admins are currently trying to fix the file system.

    Update 13.10.18 12:48 CEST: The Xen node still won't boot in read-write mode.

    The volumes of all VMs are on the system, readable and writable, but because the Xen file system is in read-only mode, all VMs are unable to boot.

    As a last resort, if no other solution can be found, we will try to reinstall CentOS and Xen on the VPS node only, while retaining the volume group of the VMs.

    Update 13.10.18 12:54 CEST: We may have found a solution, but it will take a little longer to implement.

    Update 13.10.18 13:10 CEST: CentOS and the Xen hypervisor have finally booted in read-write mode. We are booting the VMs now.

    Update 13.10.18 13:15 CEST: All VMs have booted up and are back online, but there are still some residual issues that must be resolved. We don't expect another downtime of the VMs, except for the "Madison" server, which is still having performance issues that are being investigated.

    More information will follow soon.

    Update 13.10.18 15:27 CEST: We seem to have found the root of the issue, but we must reboot the servers one last time. This will cause an outage of 5 to 10 minutes.

    Update 13.10.18 16:50 CEST: The Xen hypervisor and all VMs have been back online for over an hour and are running stable now. We consider this issue permanently resolved.

    The main issues were Xen writing a large log file that filled up the disk space and forced the file system into read-only mode, and the latest Xen 4.8 update, which assigned an unequal CPU vCore distribution. While some vCores were shared by 2 or more VMs, others were unused. To resolve this, we have manually assigned the CPU vCores to each VPS; they are all dedicated now, except for the self-managed VPSs.

    We're extremely sorry for the inconvenience and thank you very much for your patience.

    To make things right for this long outage, we will prepare something special for all affected clients in the near future. Of course, we will also learn from this and apply software updates less frequently (unless they contain security updates). Maintenance updates (without security patches) will only be applied once they are stable enough for a production environment. Nowadays, the label "stable" in software can still come with surprises, unfortunately.

  • Date - 12.10.2018 20:00 - 13.10.2018 16:56
  • Last Updated - 13.10.2018 17:05
Performance Issues on Server "Madison" (Resolved)
  • Priority - High
  • Affecting Server - Madison
  • Performance issues have been detected on the shared hosting server "Madison" following the recent kernel update.

    We are already investigating this issue at highest priority and will post updates here once available.

    We apologize for the inconvenience.

    Update 12.10.18 21:38 CEST: The performance has stabilized. We're still working on this issue.

    Update 12.10.18 22:52 CEST: We've applied a few tweaks, but the performance is still not good enough. We're currently upgrading MariaDB from 10.1 to 10.2, as it includes some performance improvements as well.

    Update 12.10.18 23:29 CEST: The upgrade to MariaDB 10.2 has been completed. The performance is still slow and we're investigating.

    Update 12.10.18 23:50 CEST: We must reboot the server again to apply a configuration update. It should be back online within 5 to 10 minutes.

    Update 13.10.18 00:26 CEST: We're rebooting the server to the old kernel to see if this solves the performance issues.

    Update 13.10.18 03:17 CEST: The performance has improved a bit, but we are still seeing websites, mostly those that don't use caching, load much slower than usual. The investigation continues.

    Update 13.10.18 06:02 CEST: We've opened a ticket with CloudLinux (the OS vendor), as we believe that this issue might be related to the operating system.

    Update 13.10.18 06:55 CEST: Following the recommendation of CloudLinux, we will adjust the LVE limits (resource limits), as there are some users that use an excessive amount of resources.

    Update 13.10.18 07:15 CEST: Although there are some resource-intensive websites/accounts, the performance is still low, even after reducing the resource limits. The system admin and the CloudLinux technicians are still investigating. We're extremely sorry for the inconvenience.

    Update 13.10.18 09:04 CEST: The investigations are ongoing on both sides. We kindly ask for more patience.

    Update 13.10.18 10:18 CEST: The node will be rebooted into the previous kernel, which was in use prior to the performance issues. All services will be unavailable for approx. 10 minutes.

    Update 13.10.18 10:32 CEST: The VPS node has rebooted, but is still running the new kernel. Another reboot will be required.

    Update 13.10.18 10:43 CEST: Another attempt to boot the old kernel.

    Update 13.10.18 13:28 CEST: The Madison server is back online after the Xen hypervisor failed to boot, but the performance issues persist and are being investigated now.

    Update 13.10.18 13:45 CEST: The Madison server is being rebooted.

    Update 13.10.18 13:57 CEST: An older kernel was booted. The performance issue still persists.

    The root cause is currently unknown. Its effect is that the server performance is very low, although the total CPU usage is below 20%. Other resources such as disk I/O and memory are also below 50% utilization. Basically, the server has plenty of free resources but fails to utilize them.

    The issue is still being investigated, of course.

    Update 13.10.18 16:57 CEST: The Madison VPS has been back online for over an hour and is running more stable now.

    The reason for the total outage was Xen/CentOS writing a very large log file that quickly filled up the disk space and forced the file system to become read-only. The performance issues were caused by the latest Xen 4.8 update, which assigned an unequal CPU vCore distribution. While some vCores were shared by 2 or more VMs, others were unused, and those in use couldn't reach their maximum potential on the Madison VPS. To resolve this, we have manually assigned the CPU vCores to each VPS; they are all dedicated now, except for the self-managed VPSs.
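
    For illustration, dedicated vCPU pinning with the Xen "xl" toolstack looks roughly like the sketch below (the domain names and core numbers are placeholders, not our actual layout):

        import subprocess

        PIN_MAP = {
            # domain name: (first physical core, number of vCPUs) -- placeholders
            "madison-vps": (0, 6),
            "other-vps":   (6, 2),
        }

        for domain, (first_core, vcpus) in PIN_MAP.items():
            for vcpu in range(vcpus):
                core = first_core + vcpu
                # "xl vcpu-pin DOMAIN VCPU CPUS" pins one virtual CPU to one physical core
                subprocess.run(["xl", "vcpu-pin", domain, str(vcpu), str(core)], check=True)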

    There are still periodic performance issues, but these are mostly caused by accounts that overuse resources and/or receive too much traffic for a shared hosting environment. We will monitor these accounts and contact the respective clients for upgrade or optimization proposals in the following days/weeks.

    Furthermore, we will wait 1 or 2 days for enough MySQL/MariaDB stats to be collected so we can run a script that optimizes the MySQL/MariaDB server for better performance.

    We're extremely sorry for the inconvenience and thank you very much for your patience.

    To make things right for this long outage, we will prepare something special for all affected clients in the near future. Of course, we will also apply software updates less frequently (unless they contain security updates). Maintenance updates (without security patches) will only be applied once they are stable enough for a production environment. Nowadays, the label "stable" in software can still come with surprises, unfortunately.

  • Date - 12.10.2018 21:15 - 13.10.2018 17:04
  • Last Updated - 13.10.2018 17:04
Partial Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting System - myLoc Data Center
  • A partial network outage at myLoc data center has been detected, which affects a small number of servers.

    An emergency ticket has been opened. Updates will be posted as soon as we have more information.

    We're sorry for any inconvenience this outage may cause.

    Update 11.10.18 19:15 CEST: There is currently approximately 75% packet loss. We're still waiting for an update from the DC.

    Update 11.10.18 19:35 CEST: There has been 0% packet loss for the past 15 minutes and all servers appear to be back online. However, the DC engineers have confirmed the network issue and are still working on a permanent solution at the moment. Further outages may occur.

    Update 11.10.18 20:10 CEST: No further outages have been detected in almost 1 hour. We will mark this incident as resolved.

  • Date - 11.10.2018 19:04 - 11.10.2018 20:10
  • Last Updated - 13.10.2018 16:48
Emergency Network Maintenance - Core Backbone Router Update at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Other - Core Backbone Router at myLoc DC
  • Start Time: 23 June 2018 01:00 CEST (Saturday)
    Estimated End Time: 23 June 2018 03:00 CEST (Saturday)
    Duration: 2 Hours

    Equipment: Core Backbone Router
    Type of Work: Software Upgrade and Router Configuration
    Impact of Work: Network Outage, Packet Loss
    Estimated Downtime: Approximately 30 minutes

    In the night from Thursday 21.06.2018 to Friday 22.06.2018, part of the network infrastructure of myLoc data center was affected by a temporary outage. The outage was identified immediately, and the network was restored through existing redundancies.

    However, to permanently repair and restore the normal state of the network, it is necessary to install a critical software update, which will be provided during the day today, on the affected routing platforms. The on-site network technicians will therefore carry out an emergency maintenance in the night from Friday 22.06.2018 to Saturday 23.06.2018, between 01:00 and 03:00 CEST.

    To ensure the effectiveness of the update, a parallel reload of the complete platforms is necessary. This reload is expected to lead to an outage of the network connection of up to 30 minutes.

    Unfortunately, this emergency maintenance is unavoidable and serves to ensure the long-term stable operation of the network.

    You can follow this maintenance on the status page of myLoc data center at http://www.myloc-status.de/en/. We will post updates, when available, on our Network Status page as well.

    Please inform your customers accordingly, if necessary.

    We kindly ask for your understanding and thank you in advance for your patience. As always, you can contact us if you have any questions or concerns.

  • Date - 23.06.2018 01:00 - 23.06.2018 05:04
  • Last Updated - 23.06.2018 07:21
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on the VPS node "Avalon".

    Among the updates, there are critical security updates and important bug-fixes that are recommended to be applied.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 15 minutes for the VPS node and all hosted VMs.

    We're sorry for any inconvenience this may cause.

    Update 09.06.18 21:22 CEST: The update process has been initiated.

    Update 09.06.18 21:37 CEST: Updates successfully installed. The VPS node and all hosted VMs are going down for reboot now.

    Update 09.06.18 21:51 CEST: The VPS node and all VMs have been successfully rebooted and are back online. The maintenance is hereby completed.

    Thank you for your patience and understanding.

  • Date - 09.06.2018 21:30 - 09.06.2018 21:52
  • Last Updated - 09.06.2018 21:52
Scheduled Maintenance - Hardware Replacement and Software Updates (Resolved)
  • Priority - High
  • Affecting System - Hypervisor "Manhattan", Shared Hosting Server "Madison"
  • Start Time: Friday, 01 June 2018 20:30 CEST
    Estimated End Time: Friday, 01 June 2018 21:15 CEST
    Estimated Duration: up to 45 minutes

    Server / Equipment: Shared Hosting Server "Madison", Hypervisor "Manhattan" and all related VMs
    Type of Work: Replacement of HPE Smart Storage Battery
    Impact of Work: Downtime due to hardware replacement
    Estimated Downtime: minimum 15 minutes, up to 45 minutes

    Within the above time-frame, the server will be powered off for hardware replacement. The HPE Smart Storage Battery has been detected as failed. This component is a centralized battery backup unit that is required to prevent loss of data in case of a power outage. It must be replaced to ensure the safety and integrity of the data.

    The hardware component will be replaced by an on-site data center technician. All available updates will be posted here during the maintenance.

    We will also use this opportunity to install all available operating system updates (including kernel), which include important bugfixes and security patches. These updates require the server to be rebooted.

    Thank you in advance for your patience and understanding. If you have any questions, please contact us.

    Update 01.06.18 20:27 CEST: The server will be shut down for maintenance now. An on-site data center engineer will begin with the hardware replacement immediately after the shutdown.

    Update 01.06.18 20:33 CEST: The server has been shut down. The start of the hardware replacement has now been confirmed by the data center.

    Update 01.06.18 20:43 CEST: The HPE Smart Storage Battery has been successfully replaced. We will install the kernel upgrades once the server boots up and then reboot it again.

    Update 01.06.18 20:49 CEST: The OS updates on the node are now in progress.

    Update 01.06.18 20:57 CEST: The updates have been successfully installed. The VPS node will now be rebooted.

    Update 01.06.18 21:01 CEST: The VPS node has been updated and rebooted. We will now begin installing all available updates on the managed/shared servers.

    Update 01.06.18 21:20 CEST: The maintenance has been successfully completed. All servers are now up-to-date and the HPE Smart Storage Battery is enabled again.

    Thank you again for your patience, it is greatly appreciated!

  • Date - 01.06.2018 20:30 - 01.06.2018 21:21
  • Last Updated - 01.06.2018 21:21
Scheduled Network Maintenance at myLoc Data Center (Resolved)
  • Priority - Medium
  • Affecting System - myLoc Data Center Network
  • Start Time: 07 May 2018 05:00 CEST (Monday)
    Estimated End Time: 07 May 2018 06:00 CEST (Monday)
    Estimated Duration: 1 hour

    Server / Equipment: Network
    Type of Work: Planned maintenance
    Impact of Work: Short network outage, packet loss
    Estimated Downtime: approx. 2 minutes

    The myLoc data center has informed us about maintenance work on the network, in the area where our servers are located. The maintenance will be carried out during the above time-frame.

    The purpose of this maintenance is to assure the reliability of the network and a stable connection for our servers. During the maintenance, some servers may be unreachable for about 2 minutes.

    You can follow the status of this maintenance here and/or on the status page of myLoc data center at http://webtropia-status.de/de/1251/

    We're sorry for any inconvenience this may cause. Thank you for your understanding.

  • Date - 07.05.2018 05:00 - 07.05.2018 06:00
  • Last Updated - 12.05.2018 23:13
Partial Network Outage at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting System - myLoc Data Center
  • We are currently aware of a network outage at myLoc data center that affects a small number of IP addresses in the area where some of our servers are located.

    An emergency ticket has been opened at myLoc data center.

    We'll post updates as soon as we have more information. We're sorry for any inconvenience this outage may cause.

    Update 26.04.18 17:08 CEST: The affected IPs are back online. We haven't received a response from the myLoc data center yet to confirm if the issue has been permanently resolved. Should further outages occur, we will update this page.

    Update 26.04.18 17:15 CEST: The network issue has been confirmed and permanently fixed, according to myLoc data center. No further outages are expected at the moment.

    Thank you for your patience and understanding.

  • Date - 26.04.2018 16:58 - 26.04.2018 17:15
  • Last Updated - 26.04.2018 17:15
Partial Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc Data Center
  • We are currently aware of a network issue at myLoc data center that affects the area where some of our servers are located.

    This issue is currently being investigated.

    Update 04.04.18 13:49 CEST: An emergency ticket has been opened at myLoc data center since no information is currently available on their status page. We'll post updates as soon as we have more information.

    Update 04.04.18 13:56 CEST: The servers are currently back online, but we have yet to receive an answer from the data center. Further outages may follow.

    Update 04.04.18 14:09 CEST: The network issue has been confirmed by the data center. Some servers are currently offline again. The network engineers at the data center are already working on resolving the issue.

    Update 04.04.18 14:25 CEST: We've been informed by the data center that the issue has been permanently resolved. All servers are back online.

    We're sorry for the inconvenience caused by this incident.

  • Date - 04.04.2018 13:44 - 04.04.2018 14:25
  • Last Updated - 04.04.2018 15:13
Partial Network Outage at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Other - myLoc Data Center
  • A partial network outage has been reported by myLoc data center, affecting some of their areas.

    None of our servers are affected by this outage so far, but we're posting this information as a precaution, in case the network outage extends to other parts of the data center.

    We'll update this case accordingly if we notice any outage. In the meantime, you can follow the outage on the myLoc status page at http://www.myloc-status.de/en/1228/ 

    Update 04.04.18 11:39 CEST: The root of the issue has been identified and is now being fixed. All our servers remain online; we're not affected by this outage so far.

    Update 04.04.18 11:56 CEST: The network issue has been resolved. No downtime was caused to our servers.

  • Date - 04.04.2018 11:30 - 04.04.2018 11:50
  • Last Updated - 04.04.2018 11:56
Urgent Network Maintenance - Router Reload at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting System - Router at myLoc Data Center
  • Start Time: Wednesday, 28 March 2018 04:00 CEST
    Estimated End Time: Wednesday, 28 March 2018 05:00 CEST
    Estimated Duration: 1 Hour

    Equipment: Router
    Type of Work: Router reload
    Impact of Work: Network outage, packet loss
    Estimated Downtime: Approximately 10 minutes

    During the above window, the networking team of the myLoc data center will be performing an urgent router reload in the area where our servers are located. The reload is necessary due to an acute software problem.

    A short period of downtime and high latency lasting around 10 minutes is expected. The networking team will strive to keep the limitations as low as possible, but due to the extensive measures, unfortunately, it is not possible to carry out the maintenance without any interruption.

    You can follow this maintenance on the status page of myLoc data center at http://www.myloc-status.de/en/1222/. We will post updates, when available, on this page.

    Please don't hesitate to contact us if you have any questions or concerns. You may do so by opening a ticket or by replying to this email.

    We're sorry for any inconvenience this maintenance work may cause and appreciate your understanding.

    Update 28.03.18 04:00 CEST: The maintenance is now in progress.

    Update 28.03.18 04:58 CEST: The maintenance has been completed. Our monitoring system has registered an outage of less than 5 minutes.

  • Date - 28.03.2018 04:00 - 28.03.2018 05:00
  • Last Updated - 28.03.2018 05:00
Scheduled Maintenance and Server Reboot - VPS Hypervisor "Avalon" (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • Start Time: 12 March 2018 11:45 CET (Monday)
    Estimated End Time: 12 March 2018 12:30 CET (Monday)
    Estimated Duration: 45 minutes

    Server / Equipment: VPS hypervisor "Avalon" and all related VMs
    Type of Work: Maintenance, Software Upgrade
    Impact of Work: Outage due to required server reboot
    Estimated Downtime: minimum 10 minutes, up to 30 minutes

    During the above time-frame, we will install all available operating system and kernel updates on the VPS node "Avalon". These maintenance tasks are necessary to maintain the stability and security of our servers, as the pending updates cover some important vulnerabilities and bug fixes. During this timeframe, the VPS hypervisor and all hosted VMs will be rebooted. This will result in an outage of approximately 10 minutes under normal circumstances.

    Thank you in advance for your understanding. If you have any questions, please contact us.

    Update 12.03.18 12:03 CET: The VPS hypervisor and all hosted VMs are going down for reboot now.

    Update 12.03.18 12:13 CET: The maintenance has been successfully completed. All VMs and related services are back online.

    Thank you for your patience.

  • Date - 12.03.2018 11:45 - 12.03.2018 12:30
  • Last Updated - 12.03.2018 12:13
Planned Network Maintenance - Core Backbone Router Update at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting System - Core Backbone Router at myLoc DC
  • Start Time: 01 March 2018 01:00 CET (Thursday)
    Estimated End Time: 01 March 2018 05:00 CET (Thursday)
    Duration: 4 Hours

    Equipment: Core Backbone Router
    Type of Work: Software Upgrade and Router Configuration
    Impact of Work: Network Outage, Packet Loss
    Estimated Downtime: Approximately 15 minutes

    During the above window, the networking team of the myLoc data center, where our servers are located, will be performing software updates and configuration changes of their core backbone router.

    The data center expects the impact of this change to be minimal; however, you may experience some short periods of downtime and latency lasting around 15 minutes while the core backbone router is updated. The networking team will strive to keep the limitations as low as possible, but due to the extensive measures, unfortunately, it is not possible to carry out maintenance without any interruption.

    Part of this maintenance includes the installation of security updates, which are required for the safety of the network.

    Please don't hesitate to contact support if you have any questions or concerns. You may do so by opening a ticket or by replying to this email.

    We're sorry for any inconvenience this maintenance work may cause and appreciate your understanding.

    You can follow this maintenance on the status page of myLoc data center at http://www.myloc-status.de/en/1199/ 

    Update 01.03.18 06:13 CET: The maintenance has been successfully completed. No outages have been detected by our monitoring system.

  • Date - 01.03.2018 01:00 - 01.03.2018 05:00
  • Last Updated - 01.03.2018 05:40
Scheduled Network Maintenance at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Other - myLoc Data Center Network
    Start Time: 09 January 2018 04:00 CET (Tuesday)
    Estimated End Time: 09 January 2018 06:00 CET (Tuesday)
    Estimated Duration: 2 hours

    Server / Equipment: Network
    Type of Work: Routine maintenance
    Impact of Work: Short network outages, packet loss
    Estimated Downtime: approx. 15 minutes

    The myLoc data center has informed us about maintenance work on the network in the area where our servers are located. The maintenance will be carried out during the above time-frame.

    The purpose of this maintenance is to assure the reliability of the network and a stable connection for our servers. During the maintenance, central network components will be exchanged, which will cause an outage of a few minutes for each server.

    You can follow the status of this maintenance here or on the status page of myLoc data center at http://www.myloc-status.de/en/1145/ 

    We're sorry for any inconvenience this may cause. Thank you for your understanding.

    Update 08.01.18 22:58 CET: The maintenance has been postponed. A new schedule will follow soon.

  • Date - 09.01.2018 04:00 - 09.01.2018 06:00
  • Last Updated - 27.02.2018 22:32
Partial Network Issue at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Server - Madison
    A network issue has been detected in the area where the server "Madison" is located, which has so far caused a downtime of approx. 3 minutes.

    The server and all websites are currently online, but there are still some inconsistencies in the network routing. Further outages may follow.

    The data center has been notified and they've confirmed that the issue is already being analysed by their on-site network engineers.

    Updates will be posted here as soon as we gather more information. We're sorry for any inconvenience this may cause.

    Update 27.02.18 11:10 CET: The network inconsistencies seem to have been resolved. No further outages were detected. We're awaiting a confirmation from the data center.

    Update 27.02.18 14:30 CET: The network issue should be permanently resolved now.

  • Date - 27.02.2018 10:20 - 27.02.2018 11:10
  • Last Updated - 27.02.2018 13:31
Network Issue at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Server - Avalon
  • We are aware of a network issue at myLoc data center that affects the VPS node "Avalon" and all hosted VMs.

    Our monitoring systems have detected a packet loss of over 50%.

    myLoc data center has been notified and we are awaiting more information from them. Updates will be posted here as soon as available.

    We're sorry for any inconvenience caused.

    Update 17.02.18 16:37 CET: No packet loss has been detected for several minutes. We're still awaiting a response from the data center.

    Update 17.02.18 17:00 CET: The data center has confirmed the network issue and they are already working on a permanent solution.

    Update 17.02.18 17:20 CET: According to the data center, the network issue has been resolved. All servers are online and no further packet loss was detected.

  • Date - 17.02.2018 19:54 - 17.02.2018 17:20
  • Last Updated - 17.02.2018 18:54
Partial Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting System - myLoc Data Center Network
  • We have been informed of a partial network outage at myLoc data center.

    Currently, we have noticed 10% to 50% packet loss on some of our servers, but all servers are online at the time of this writing.

    The myLoc on-site engineers are already working on fixing the problem. We will post updates here when available and you can follow the status on the myLoc Network Status page.

    We're sorry for any inconvenience this may cause.

    Update 24.01.18 10:31 CET: The network issue is with the Telekom carrier. The BGP session has been shut down. More information will be posted once available.

    Update 24.01.18 10:31 CET: The network issue has been resolved. All servers are online and there is no longer any packet loss.

  • Date - 24.01.2018 10:27 - 24.01.2018 10:46
  • Last Updated - 24.01.2018 10:09
Server Unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • We are aware of a network issue affecting the server Madison.

    The issue is already being investigated. We will post updates as soon as possible.

    Update 06.01.18 00:33 CET: The network issue has been identified at the node level and fixed. The server Madison and all services are back online.

    We're sorry for the inconvenience this may have caused.

  • Date - 06.01.2018 00:28 - 06.01.2018 00:39
  • Last Updated - 05.01.2018 23:40
Partial Network Outage at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting System - myLoc Data Center
  • We have been alerted of a partial network outage at the myLoc data center, which currently affects only one of our servers.

    The data center is aware of this outage and is already working on resolving the network issue as soon as possible.

    Updates will be posted here and on the myLoc status page at http://www.myloc-status.de/en/1139/ 

    Should the outage affect other servers of ours, we will let you know. We're sorry for the inconvenience caused by this outage.

    Update 19.12.17 00:58 CET: The unreachable server is now back online and all other servers remained online. The data center is still having issues with their DNS servers, but since we use external DNS resolvers by default, this issue doesn't affect any of our servers.

    This issue will be marked as resolved for now. If we notice any further issues, we will re-open it.

  • Date - 19.12.2017 00:31 - 19.12.2017 00:57
  • Last Updated - 19.12.2017 00:09
Damaged Power Transformers at myLoc DUS1 and DUS3 (Resolved)
  • Priority - High
  • Affecting Other - Power Transformers / myLoc DUS1 and DUS3
  • Two transformers of Stadtwerke Dusseldorf (energy supplier) on the myLoc campus are currently experiencing a disruption. The data centers DUS1 and DUS3, where some of our servers are located, are affected by this.

    The switch to the emergency power system worked without any problems and the affected data centers are still supplied with power. The staff of the municipal utility is already on site, working on restoring the transformers. We will inform you as soon as the fault has been corrected or there is a status update.

    You can also follow the status on the official myLoc status website at http://www.myloc-status.de/en/1105/

    We're sorry for any inconvenience this disruption may cause.

    Update 22.11.17 15:09 CET: The transformer powering myLoc DUS3 has been restored. The transformer powering DUS1 is still being repaired.

    Update 22.11.17 15:15 CET: The transformer powering myLoc DUS1 has also been restored. The power systems are fully operational again.

  • Date - 22.11.2017 14:39 - 22.11.2017 15:16
  • Last Updated - 22.11.2017 14:17
Scheduled Network Maintenance at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • Start Time: 02 November 2017 23:00 CET (Thursday)
    Estimated End Time: 03 November 2017 01:00 CET (Friday)
    Estimated Duration: 2 hours

    Server / Equipment: Network
    Type of Work: Routine maintenance
    Impact of Work: Short network outages, packet loss
    Estimated Downtime: up to 10 minutes

    During the above time-frame, the network engineers at myLoc data center will carry out maintenance work on the network connection within the area where our VPS hypervisor "Avalon" is located.

    A network interruption of maximum 10 minutes is expected. The maintenance work is necessary in order to ensure a stable connection for our servers.

    We will post all available updates and announce the completion of the maintenance here.

    We're sorry for any inconvenience this maintenance may cause and appreciate your understanding.

    Update 02.11.17 23:09 CET: The maintenance is currently in progress. Some IP addresses and VMs are currently unavailable, but should be back online soon.

    Update 02.11.17 23:17 CET: The respective IPs and VMs are currently back online. The maintenance is still ongoing.

    Update 02.11.17 23:25 CET: The maintenance has been successfully completed.

    Thank you for your patience.

  • Date - 02.11.2017 23:00 - 03.11.2017 01:00
  • Last Updated - 02.11.2017 22:28
Scheduled Maintenance of Power Supply at myLoc - VPS Hypervisor "Avalon" (Resolved)
  • Priority - Medium
  • Affecting Server - Avalon
    Start Time: 18 October 2017 00:00 CEST (Wednesday)
    Estimated End Time: 18 October 2017 01:00 CEST (Wednesday)
    Estimated Duration: 1 hour

    Affected Server / Equipment: VPS Hypervisor "Avalon"
    Type of Work: Scheduled maintenance of the power supply at myLoc data center
    Impact of Work: Power outage, server must be shut down during maintenance
    Estimated Downtime: minimum 15 minutes, up to 60 minutes

    During the above time-frame, the data center has scheduled a required maintenance of the power supply within the segment where our VPS hypervisor "Avalon" is located. All servers within the respective segment must be shut down while the data center performs the maintenance, as a short power loss cannot be avoided.

    We have given the data center access to the server so that it can be shut down right when the maintenance starts, keeping the downtime as short as possible. The maintenance is estimated to take approximately 5 to 10 minutes. Another 5 minutes are required for the server to boot up.

    You can follow the status of this maintenance here. All available updates and the completion of the maintenance will be posted there.

    We're sorry for any inconvenience the maintenance may cause and kindly ask for your understanding.

    Update 18.10.17 00:00 CEST: The maintenance has been started. The hypervisor and all hosted VMs are shutting down.

    Update 18.10.17 00:25 CEST: The maintenance is still ongoing.

    Update 18.10.17 00:35 CEST: The maintenance has been completed. The server and all VMs have been back online for about 5 minutes.

    Thank you for your patience.

  • Date - 18.10.2017 00:00 - 18.10.2017 01:00
  • Last Updated - 17.10.2017 22:35
Scheduled Hardware Replacement (HPE Smart Storage Battery) (Resolved)
  • Priority - High
  • Affecting System - Hypervisor "Manhattan", Shared Hosting Server "Madison"
  • Start Time: 28 September 2017 20:00 CEST (Thursday)
    Estimated End Time: 28 September 2017 20:45 CEST (Thursday)
    Estimated Duration: up to 45 minutes

    Server / Equipment: Shared Hosting Server "Madison", Hypervisor "Manhattan" and all related VMs
    Type of Work: Replacement of HPE Smart Storage Battery
    Impact of Work: Downtime due to hardware replacement
    Estimated Downtime: minimum 15 minutes, up to 45 minutes

    Within the above time-frame, the server will be powered off for hardware replacement. The HPE Smart Storage Battery has been detected as failed. This component is a centralized battery backup unit that is required to prevent loss of data in case of a power outage and will be replaced to assure the safety and integrity of the data.

    The hardware component will be replaced by an on-site data center technician within the above time frame. We will post here as soon as the data center gives us an update.

    Thank you in advance for your patience and understanding. If you have any questions, please contact us.

    Update 28.09.17 19:45 CEST: The hardware replacement has been confirmed by the data center technician. We will begin to shut down the VMs and the physical server in approx. 10 minutes.

    Update 28.09.17 19:55 CEST: The shutdown for the hardware replacement has been initiated. We'll post an update as soon as the server is back online or we receive an answer from the data center technician.

    Update 28.09.17 20:15 CEST: The hardware replacement is now in progress.

    Update 28.09.17 20:30 CEST: The HPE Smart Storage Battery has been successfully replaced. The server is back online, but it will be rebooted again shortly to apply updates.

    Update 28.09.17 20:37 CEST: Reboot is now in progress.

    Update 28.09.17 20:53 CEST: The maintenance has been successfully completed for all VMs except for the shared hosting server "Madison". The update for this server is still in progress and a final reboot is still pending.

    Update 28.09.17 20:58 CEST: The update for the shared hosting server "Madison" has been successfully applied and the server was rebooted.

    The hardware replacement and maintenance are now fully completed. Thank you again for your patience.

  • Date - 28.09.2017 20:00 - 28.09.2017 20:45
  • Last Updated - 28.09.2017 18:59
Routine Maintenance at myLoc Data Center DUS3 (Resolved)
  • Priority - Medium
  • Affecting Other - myLoc Data Center DUS3
  • Start Time: 26 September 2017 23:00 CEST (Tuesday)
    Estimated End Time: 27 September 2017 07:00 CEST (Wednesday)
    Estimated Duration: 8 hours

    Server / Equipment: Power supply at myLoc DUS3
    Type of Work: Routine maintenance, electrical load test
    Impact of Work: None expected
    Estimated Downtime: None expected

    The data center will carry out a routine electrical load test in their data center DUS3 in the night from 26.09.17 to 27.09.17.

    All technical systems will be checked to ensure that the power supply is guaranteed at all times. This work also serves to increase supply reliability. The data center does not expect any impact on the power supply.

    The data center has made extensive arrangements with the involved service providers and utilities to minimize any residual risk.

    You can follow the status of this maintenance here or on the status page of myLoc data center at http://www.myloc-status.de/en/1047/ 

    Update 27.09.17 06:14: The maintenance has been successfully completed.

  • Date - 26.09.2017 23:00 - 27.09.2017 07:00
  • Last Updated - 27.09.2017 12:52
Maintenance of the UPS at myLoc Data Center DUS2 (Resolved)
  • Priority - Medium
  • Affecting System - myLoc Data Center / DUS2 / UPS
    On 20.09.2017 at 02:00 CEST, myLoc data center will carry out maintenance work on the UPS (Uninterruptible Power Supply) in the data center DUS2. No outage is expected, but a residual risk cannot be ruled out completely.

    Updates can be followed on the day of the maintenance on the status page of the data center at http://www.myloc-status.de/en/ 

    We will keep this page updated, should any issues be detected during or after the maintenance.

    Update 20.09.17 10:30 CEST: The maintenance of the UPS has been successfully completed.

  • Date - 20.09.2017 02:00 - 20.09.2017 04:00
  • Last Updated - 20.09.2017 08:30
Scheduled Maintenance and Server Reboot (Resolved)
  • Priority - High
  • Affecting Other - VPS Hypervisor "Manhattan" and Server "Madison"
  • Start Time: 12 September 2017 20:00 CEST (Tuesday)
    Estimated End Time: 12 September 2017 20:45 CEST (Tuesday)
    Estimated Duration: 45 minutes

    Server / Equipment: Shared Hosting Server "Madison", VPS hypervisor "Manhattan" and all related VMs
    Type of Work: Maintenance, Software Upgrade
    Impact of Work: Outage due to required server reboot
    Estimated Downtime: minimum 10 minutes, up to 45 minutes

    On the 12th of September 2017, between 20:00 and 20:45 CEST, we will install all available operating system and kernel updates on the VPS hypervisor "Manhattan" and the shared/reseller hosting server "Madison".

    These maintenance tasks are necessary to maintain the stability and security of our servers, as the pending updates cover some important vulnerabilities and bug fixes. During this timeframe, the VPS hypervisor and all hosted VMs will be rebooted. This will result in an outage of approximately 10 minutes under normal circumstances.

    Thank you in advance for your understanding. If you have any questions, please contact us.

    Update 12.09.17 19:53 CEST: The updates are now being installed.

    Update 12.09.17 20:03 CEST: Updates successfully installed. The VPS hypervisor and all VMs are going down for reboot now. These are expected to be back online within approximately 10 minutes.

    Update 12.09.17 20:15 CEST: The hypervisor and all VMs are now back online. The maintenance has been successfully completed.

    Thank you for your patience.

  • Date - 12.09.2017 20:00 - 12.09.2017 20:15
  • Last Updated - 12.09.2017 18:15
Scheduled Maintenance and Server Reboot - VPS Hypervisor "Avalon" (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • Start Time: 12 September 2017 07:00 CEST (Tuesday)
    Estimated End Time: 12 September 2017 07:45 CEST (Tuesday)
    Estimated Duration: 45 minutes

    Server / Equipment: VPS hypervisor "Avalon" and all related VMs
    Type of Work: Maintenance, Software Upgrade
    Impact of Work: Outage due to required server reboot
    Estimated Downtime: minimum 10 minutes, up to 45 minutes

    On the 12th of September 2017, between 07:00 and 07:45 CEST, we will install all available operating system and kernel updates on the VPS node "Avalon". These maintenance tasks are necessary to maintain the stability and security of our servers, as the pending updates cover some important vulnerabilities and bug fixes. During this timeframe, the VPS hypervisor and all hosted VMs will be rebooted. This will result in an outage of approximately 10 minutes under normal circumstances.

    Thank you in advance for your understanding. If you have any questions, please contact us.

    Update 12.09.17 07:20 CEST: The maintenance has been successfully completed. The VPS hypervisor and all VMs are back online after 8 minutes of downtime. Thank you for your patience.

  • Date - 12.09.2017 07:00 - 12.09.2017 07:45
  • Last Updated - 12.09.2017 07:56
Maintenance of the Routing Platform / Backbone (Resolved)
  • Priority - High
  • Affecting System - myLoc Data Center Backbone
  • On the 25th of August 2017 between 03:00 and 05:00 CEST, myLoc data center will carry out a maintenance of their backbone router. These maintenance tasks are necessary to improve stability and redundancy in the area where our servers are located. During this timeframe, our servers may experience high latency, packet loss and/or short outages over a few minutes.

    Thank you in advance for your understanding. We will update this status case as soon as the maintenance has been completed.

    The status of the maintenance can also be followed on the network status page of myLoc data center at http://www.myloc-status.de/en/1006/ 

    Update 25.08.17 04:57 CEST: The maintenance has been successfully completed. No outages were detected during this time.

  • Date - 25.08.2017 03:00 - 25.08.2017 05:00
  • Last Updated - 25.08.2017 03:34
HTTP(S) Service Failed (Resolved)
  • Priority - High
  • Affecting Server - Madison
  • We are aware of an issue with the HTTP/S service on the server Madison and are currently investigating it. The websites are unavailable, but all other services are still functional so far.

    Updates will follow as soon as possible. Thank you for your patience.

    Update 11.08.17 18:14 CEST: We have rebooted the server due to stability issues. The server should be back online within 5 minutes.

    Update 11.08.17 18:23 CEST: The server is still booting up, but much slower than usual. We won't interrupt the booting process, as this could cause even more issues.

    Update 11.08.17 18:38 CEST: The server and all websites/services are now back online. We'll keep monitoring the server closely and investigate the cause.

    Update 11.08.17 18:45 CEST: The issue may have been caused by a kernel bug. We are now installing the latest kernel and all available updates in an attempt to prevent another service/server crash. The server will need to be rebooted again shortly to apply the updates. We expect it to boot up normally this time, within 5 to 10 minutes at most.

    Update 11.08.17 18:52 CEST: The updates have been installed and the server was rebooted. No issues have been detected so far. All services should be online in about 2 minutes.

    Update 11.08.17 18:55 CEST: All services and websites are now online and the issue should be permanently fixed.

    Update 11.08.17 19:14 CEST: We will mark this case as resolved. No issues have been detected since the server was rebooted and all services are running optimally. The server remains monitored, as usual, and we will promptly investigate any issues that may arise.

    We're sorry for the inconvenience caused. Thank you for your understanding.

  • Date - 11.08.2017 18:10 - 11.08.2017 19:15
  • Last Updated - 11.08.2017 17:16
Partial Network Outage of IP Allocation (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc Data Center
  • We are aware of a network outage that partially affects some of our IP addresses and servers.

    We will reach out to myLoc data center and will let you know as soon as we have more information.

    We're deeply sorry for the inconvenience.

    Update 28.07.17 15:13 CEST: The outage was temporary. All IP addresses and/or servers are reported to be back online.

  • Date - 28.07.2017 15:02 - 28.07.2017 15:14
  • Last Updated - 28.07.2017 13:14
Scheduled Migration of the Server "Neptune" (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our ongoing efforts to improve our hosting service, maintain reliability and increase performance, all accounts and data from the server “Neptune” will be moved to a new server. The new server is equipped with two high-performance processors, faster RAM and Solid State Drives (SSD). These can enhance the overall server performance and speed of your website. It should also allow us to increase the resource limits in the future.

    We kindly ask you not to make any changes to your hosting account (incl. websites, emails, settings, databases, etc.) while the transfer is in progress. Once your account's data has been transferred to the new server, any changes made on the old server can no longer be replicated to the new one.

    We will try to make the transfer as seamless as possible. A myLoc data center network engineer has scheduled the routing of the existing IP allocation to the new server for tomorrow at 09:00 CEST. This is when the entire transfer process should be fully completed.

    No change of nameservers or IP addresses will be necessary. A temporary IP address will be assigned to the accounts only during the transfer and will be reverted once our current IP allocation is routed to the new server.

    You can read more detailed information and some frequently asked questions about the server transfer here.

    Thank you for your patience and understanding!

    Update 26.07.17 12:25 CEST: The hostname of the new server is "Madison" (madison.customwebhost.com). It's fully prepared to host the hosting accounts of the server "Neptune".

    Update 26.07.17 12:35 CEST: All cPanel backups located under the home folders have been moved to our off-site backup server. The cPanel transfer tool doesn't transfer these backups, so if you still require them, please contact us after the transfer is completed.

    Update 26.07.17 16:15 CEST: We've initiated a manual R1Soft backup process to have the most recent data if something unexpected happens during the transfer. It's expected to finish right before we start the transfer.

    Update 26.07.17 17:38 CEST: Backup completed. The server will be updated and rebooted. It should be back online within 5 to 10 minutes.

    Update 26.07.17 18:00 CEST: All preparations prior to the transfer have been successfully completed. We will now proceed with the transfer. During the transfer, it's highly recommended not to do any changes to your account and all related files, databases, emails, etc., unless absolutely necessary.

    Update 26.07.17 19:08 CEST: Accounts transfer in progress... 10% completed

    [status messages cleaned up]

    Update 27.07.17 05:52 CEST: Accounts transfer in progress... 90% completed

    Update 27.07.17 06:45 CEST: The data transfer is 100% complete. We've pointed the Neptune hostname and nameservers (neptune.customwebhost.com) to the new server Madison (madison.customwebhost.com). Please access the new server using these URLs:

    WHM: https://madison.customwebhost.com:2087
    cPanel: https://madison.customwebhost.com:2083
    Webmail: https://madison.customwebhost.com:2096

    If your domain points to our nameservers, your account and all services should be reachable on the new server already. However, you might have to clear your DNS cache first or wait for the DNS propagation to complete.

    We're waiting for the data center network engineer to route our IP subnets to the new server. This task is scheduled at 09:00 CEST. Then we can switch all accounts/domains from the temporary IP to their previous IPs. We'll post here once these steps are initiated.

    Update 27.07.17 08:20 CEST: We've identified an issue with the DirectoryIndex priority. cPanel has re-created the default "index.html" file under all accounts and it takes priority over the site's own index file for some domains. We'll try to find a safe way to delete this file from all accounts. In the meantime, you can manually delete the "index.html" file from your "public_html" folder. Sorry for the inconvenience.

    Update 27.07.17 08:38 CEST: The default "index.html" files have been removed.

    Update 27.07.17 09:48 CEST: The IP allocation of the server "Neptune" is now being transferred to the "Madison" server by the data center. The IPs will be unavailable until the routing is configured. This should only affect domains that use external DNS (not our nameservers). If your domain points to our DNS, it should work fine on the temporary IP.

    Update 27.07.17 09:50 CEST: The server must now be rebooted to apply the network changes. It should be back online within 5 to 10 minutes.

    Update 27.07.17 09:58 CEST: The server is back online and the IPs have been routed. We're now configuring them in cPanel and will then assign the cPanel accounts to their original IPs.

    Update 27.07.17 10:35 CEST: We're having some difficulties with cPanel, as it doesn't allow us to change the IPs in a batch. We're looking into this and will update you as soon as possible.

    Update 27.07.17 10:43 CEST: We have found a way to switch the accounts with shared IPs already. This is currently in progress and should take around 15 minutes.

    Update 27.07.17 11:31 CEST: Most accounts should now point to their usual IP address. Please remember to wait for the DNS to propagate or clear your DNS cache. We're still processing some accounts.

    Update 27.07.17 11:41 CEST: The final batch of IP changes is running right now. It should be done within 10 minutes.

    Update 27.07.17 11:50 CEST: The transfer should now be fully completed. We're doing some extensive check-ups to make sure that everything works properly.

    Please remember that it may take 1 to 24 hours for the DNS propagation to complete. You can try to speed up or skip this process by clearing your DNS cache.
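    As a rough self-check (not an official tool), the short Python sketch below looks up where a domain currently resolves and compares it with the new server's hostname; "example.com" is a placeholder for your own domain. Note that accounts on a dedicated or temporary IP will legitimately resolve to a different address than the server hostname, so treat a mismatch only as a hint that your resolver may still be serving cached records.

        import socket

        DOMAIN = "example.com"                    # placeholder: your own domain
        NEW_SERVER = "madison.customwebhost.com"  # new server hostname from this announcement

        domain_ip = socket.gethostbyname(DOMAIN)
        server_ip = socket.gethostbyname(NEW_SERVER)

        if domain_ip == server_ip:
            print(f"{DOMAIN} already resolves to the new server ({server_ip}).")
        else:
            print(f"{DOMAIN} resolves to {domain_ip}, the server hostname to {server_ip}; "
                  "your resolver may still be serving cached records.")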

    Update 27.07.17 14:21 CEST: We've checked almost all domains that were transferred and no issues have been detected.

    So far, all reported issues have been related to unfinished DNS propagation on the ISP's side. Therefore, if you're unable to access a domain, please try clearing your browser and DNS cache or try again in a few hours.

    The transfer is now considered to be completed. If you notice any issues, please report them to our technical support.

    Thank you for your patience and understanding. We hope that the new-generation server will serve your hosting requirements well.

  • Date - 26.07.2017 18:00 - 27.07.2017 12:00
  • Last Updated - 27.07.2017 14:34
Traffic Rerouted due to Failure of a Network Node (Resolved)
  • Priority - Medium
  • Affecting Other - myLoc DC - Network Node
  • A failure of a network node has just been reported by myLoc data center. The traffic is currently being rerouted, which can lead to increased latencies, but the servers should remain online.

    The myLoc technicians are already working on fixing the failed network node. If our monitoring systems detect any outages, we'll let you know immediately.

    Update 26.07.17 15:00 CEST: The fault has been identified and fixed. All traffic should be routed normally in approx. 5 minutes.

    Update 26.07.17 15:50 CEST: The traffic is now being routed normally again. No outage was recorded, all servers were online during this process.

  • Date - 26.07.2017 12:44 - 26.07.2017 15:50
  • Last Updated - 26.07.2017 13:52
Upgrade to EasyApache 4 (Resolved)
  • Priority - Medium
  • Affecting Server - Madison
  • Due to cPanel's deprecation of EasyApache 3, the server Madison will be upgraded to the latest supported version, EasyApache 4.

    This process may cause temporary 500 Internal Server errors and unavailability of the HTTP/S service, which should no longer occur once the upgrade is completely done. The migration process requires some manual steps and is estimated to take around 30 to 60 minutes.

    The same settings, modules and extensions will be used as before. This is only a conversion of the existing setup.

    Update 22.07.17 17:40 CEST: The migration to EA4 has been initiated.

    Update 22.07.17 17:55 CEST: The initial migration step has been completed. Installing all available updates, then the server will be rebooted to apply the updates.

    Update 22.07.17 18:00 CEST: The updates have been installed and the server was successfully rebooted. Proceeding with the re-installation and/or adjustment of ModSecurity, MemCached, PHP and other related components.

    Update 22.07.17 20:15 CEST: The migration to EasyApache 4, as well as the system update, have been successfully completed.

    Furthermore, the server's memory has been upgraded from 8 GB RAM to 32 GB RAM. This should result in a significant increase in performance.

    If you notice any issues, please contact our technical support department. They will gladly assist you or look into any possible issues that might be caused by the migration to EA4. Thank you.

  • Date - 22.07.2017 17:30 - 22.07.2017 20:15
  • Last Updated - 22.07.2017 18:16
Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting System - myLoc Data Center
  • There is currently a network outage at the myLoc data center that affects a large part of our servers.

    We will try to reach out to the data center and post more information as soon as possible.

    We are extremely sorry for the inconvenience caused and kindly ask for your patience and understanding.

    Update 14.07.17 03:59 CEST: The data center is already aware of this issue and are working on restoring the network: http://www.myloc-status.de/en

    Update 14.07.17 04:20 CEST: All servers are currently back online.

    Update 14.07.17 04:49 CEST: There is a partial network outage that currently affects only the VPS node "Avalon".

    Update 14.07.17 05:14 CEST: The VPS node "Avalon" is back online. There are no current outages at the moment.

    Update 14.07.17 05:41 CEST: The VPS node "Avalon" is offline again.

    Update 14.07.17 05:48 CEST: In approx. 2 minutes, the data center will perform a reload of the central router to resolve the issues with the aggregation layers. The network issues should then be permanently resolved.

    Update 14.07.17 06:40 CEST: The central router reload is expected to be completed around 07:00 CEST.

    Update 14.07.17 07:55 CEST: The network is partially back online. Only the VPS node "Avalon" is still unreachable at the moment.

    Update 14.07.17 08:42 CEST: The entire myLoc data center is offline at the moment, along with all our servers. The core routers appear to be offline, leaving the data center with no connectivity. Unfortunately, we have no additional updates, as there is currently no communication with the data center except for their status page at http://www.myloc-status.de/en . This is of course unacceptable and we're sorry that we cannot provide a more concrete update right now.

    Update 14.07.17 08:57 CEST: The data center and all servers are back online. Awaiting an update from the data center.

    Update 14.07.17 09:01 CEST: A large part of our servers is still experiencing network outages.

    Update 14.07.17 09:35 CEST: Outage is ongoing. The only update we have is that the technicians are still working on resolving the problem, but no ETA or other details are provided.

    Update 14.07.17 09:42 CEST: The core router has finally been reloaded. The technicians are now debugging and the network should be restored soon.

    Update 14.07.17 10:16 CEST: Some segments of the data center network have been restored, according to the latest update we have. The data center is still working on a permanent fix together with the vendors of the network equipment.

    Update 14.07.17 10:22 CEST: The data center is still in touch with the network equipment vendor to analyse the error. They are working together on a solution.

    Update 14.07.17 11:04 CEST: Brocade Support is currently working with the data center on the problem. We hope a solution will be found as soon as possible. An exact time window cannot yet be determined. We will keep you up to date.

    Update 14.07.17 11:43 CEST: The data center is still working on a solution.

    Update 14.07.17 11:57 CEST: The affected network segments are now gradually being put back into operation. In the next 60 minutes, important areas of the affected network segments should be accessible again.

    Update 14.07.17 12:23 CEST: A delay has just been announced. We kindly ask for a bit more patience.

    Update 14.07.17 12:32 CEST: The server "Neptune" is now back online.

    Update 14.07.17 13:03 CEST: The VPS node "Avalon" and all hosted VMs are now back online.

    Update 14.07.17 13:03 CEST: We're awaiting the restoration of our last network segment. It is still in progress.

    Update 14.07.17 13:38 CEST: All network segments where our servers are located are now back online. No further downtime is expected, but we'll await a confirmation from the DC.

    Update 14.07.17 14:05 CEST: The DC is still working on the network. Short outages of a few seconds might possibly still occur within the next 2 hours. The DC will post a final update on their status page once the issue is permanently resolved. We will re-post it here afterwards.

    Once again, please accept our sincere apologies for the outage at myLoc data center. We'll request an official statement and analyse the measures that they will undertake to prevent network outages in the future. Should the measures not be adequate, we will definitely consider moving to a different, more reliable data center in the near future in order to maintain the quality of our own services.

    We will follow-up with an email regarding the data center's statement and possible measures during the course of this month.

    Update 14.07.17 19:20 CEST: No further outages have been detected since 13:30 CEST. The network continues to be stable so far and no further outages are expected or announced by the data center. This issue should be finally resolved. Thank you again for your patience and understanding, it is greatly appreciated!

    Update 14.07.17 21:45 CEST: A full root cause analysis will be provided by the data center in the course of next week, which we will analyse and forward to all clients.

  • Date - 14.07.2017 03:50 - 14.07.2017 13:38
  • Last Updated - 14.07.2017 19:46
Emergency Maintenance of Aggregation Switches at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting System - Aggregation Switches at myLoc DC
  • Estimated Duration: 2 Hours
    Equipment: Aggregation Switches of myLoc Data Center
    Type of Work: Emergency Maintenance
    Impact of Work: Brief latency, data packet loss and/or temporary outages

    During the above time-frame, the network team at myLoc Data Center will perform urgent maintenance work on their central network components (aggregation switches). Our servers may encounter brief latency, data packet loss and/or temporary outages during the maintenance.

    This should be the completion of the emergency maintenance that was performed last week, but couldn’t be completed due to unforeseen circumstances. The networking team will strive to keep the limitations as low as possible, but due to the extensive measures, unfortunately, it may not be possible to carry out the maintenance without any interruption.

    We will monitor the maintenance closely and document all updates here. The data center will also post updates on their own status page: myLoc Status Page

    We're sorry for any inconvenience this maintenance work may cause and appreciate your understanding. If you have any questions, please don't hesitate to contact us.

    Update 07.07.17 01:00 CEST: The maintenance has started. All our servers are currently still online and accessible.

    Update 07.07.17 02:37 CEST: An extension of 30 minutes for the maintenance duration has just been announced. No packet losses or outages were caused so far.

    Update 07.07.17 03:09 CEST: The VPS node "Avalon" and all hosted VMs are currently unreachable.

    Update 07.07.17 03:17 CEST: The VPS node "Avalon" and all hosted VMs are back online.

    Update 07.07.17 03:30 CEST: The VPS node "Avalon" and all hosted VMs are unreachable again. The maintenance should be done very soon.

    Update 07.07.17 03:45 CEST: The VPS node "Avalon" and all hosted VMs are back online. We don't expect further outages.

    Summary: Packet losses and short outages were detected for a total period of 24 minutes and were related to the VPS node "Avalon" only. All other servers were online without interruption and are currently running stable.

    Thank you again for your patience and understanding!

  • Date - 07.07.2017 01:00 - 07.07.2017 03:45
  • Last Updated - 07.07.2017 08:09
Emergency Maintenance of Aggregation Switches at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Other - myLoc Data Center
  • On July 1st, 2017 at 01:00 CEST, the myLoc data center will carry out an urgent maintenance to one of their aggregation switches for the data centers DUS1 and DUS3, where our servers are located.

    The data center expects the work to be carried out without interrupting the operation, but outages can occur during this process.

    We're sorry for any inconvenience the emergency maintenance work may cause.

    Update 01.07.17 01:05 CEST: Unfortunately, some of our servers have been unreachable for 10 minutes. We'll update this case as soon as we have more information.

    Update 01.07.17 01:17 CEST: The on-site data center technicians are working on restoring the network as soon as possible.

    Update 01.07.17 01:20 CEST: Our servers are now back online. The maintenance is still in progress.

    Update 01.07.17 02:12 CEST: The network has been interrupted again.

    Update 01.07.17 02:22 CEST: Our servers are now back online.

    Update 01.07.17 02:45 CEST: The network has been interrupted again. There seems to be an issue with the core backbone.

    Update 01.07.17 03:21 CEST: The data center currently has a problem with link aggregations between the core and aggregation layers. We'll keep you informed.

    Update 01.07.17 03:41 CEST: All servers are currently back online. We have a confirmation from the data center that the core network has been permanently fixed and no further outages should be expected.

    Once again we're really sorry for the inconvenience caused by these outages and kindly ask for your understanding.

  • Date - 01.07.2017 01:00 - 01.07.2017 03:41
  • Last Updated - 01.07.2017 02:53
Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc Data Center
  • We are currently aware of an ongoing outage at the myLoc data center that affects a large part of our servers.

    We will try to reach out to the data center and post more information as soon as possible.

    Update 25.06.17 23:34 CEST: This seems to be either a network outage or a power outage, as the data center helpdesk and telephone are unreachable as well. We'll try to contact them through other channels and find out the status of the issue.

    Update 25.06.17 23:43 CEST: The data center is already aware of the outage and is working on fixing it. One of the affected servers is back online. We hope that the other servers will follow soon.

    Update 25.06.17 23:45 CEST: All servers are currently back online. The outage was caused by a network fault. We're awaiting a confirmation whether this is a temporary or permanent solution.

    Update 25.06.17 23:58 CEST: No further outages are expected. The root of the issue was a defective Line Card, which has been replaced.

    We're sorry for the inconvenience caused by this outage. Thank you for your understanding.

  • Date - 25.06.2017 23:07
  • Last Updated - 25.06.2017 22:08
VPS Node Migration and New IP Address Allocation (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • Start Time: 02 February 2017 10:00 CET (Thursday)
    Estimated End Time: 02 February 2017 20:00 CET (Thursday)
    Approximate Duration: 10 Hours

    Equipment: VPS Hypervisor and Network
    Type of Work: VPS Migration and New IP Allocation
    Impact of Work: Network Downtime
    Estimated Downtime: Approx. 15 to 60 minutes per VPS

    Source VPS Node: Sierra
    Target VPS Node: Manhattan

    In our ongoing effort to ensure our clients are provided with the best technology and service, please be informed that we will be replacing our old VPS hypervisor “Sierra”. During this maintenance, we will transfer all Virtual Private Servers to a different VPS hypervisor (physical server).

    Important: The IP address allocation of your VPS will change permanently and the current IP address allocation will no longer be available after the migration is completed. The reason for this change is that the new VPS hypervisor is located in a different area of the myLoc Data Center and we cannot transfer the current IP space. Your new IP allocation will be assigned and communicated during the migration process. If you need to know the new IP allocation before the migration, please reply to this email and we will reserve the IPs for your VPS in advance.

    We know this may cause some inconvenience and we are sorry for that. However, we also have some good news for you: This upgrade will result in a significant performance increase and improved reliability of your VPS and the network. Our new VPS hypervisor is an HP Enterprise server with the newest Intel Xeon E5 CPU generation, Solid State Drives (SSD) in a RAID-10 array, DDR4 ECC RAM and redundant power supplies.

    As we must carefully migrate each individual VPS, your server will be queued up for migration anytime during the maintenance window mentioned above. The downtime will depend on the size of your VPS. You will be updated with the status of the migration on the date of the migration, shortly before we proceed with your VPS.

    Once we finish the migration of your VPS, we will shut it down on the old VPS hypervisor, and you will need to use the new IP allocation with immediate effect after the migration.

    If you would like to schedule this migration at your preferred time, we can accommodate that. To do so, please send us your preferred date and time (the date should be before the 7th of February 2017) by replying to this email or opening a ticket with our Management department.

    We are committed to providing you with the best value for your money and are always working to improve even further. We thank you for your understanding and your continued business. If you have any questions or concerns regarding this matter, please don’t hesitate to contact us.

    Update 02.02.17 20:30 CET: The migration of all VMs has been successfully completed. Please check your emails regarding the new IP allocation. The assigned IP addresses are also listed under "My Services". Thank you for your cooperation and understanding.

  • Date - 02.02.2017 10:00 - 02.02.2017 20:00
  • Last Updated - 02.02.2017 20:36
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    Among the updates, there are several critical security updates, which is the reason why the installation has been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Update 07.01.17 11:24 CET: The server is going down for reboot now.

    Update 07.01.17 11:29 CET: The server and all services/websites are now back online.

    Thank you for your patience and understanding.

  • Date - 07.01.2017 11:25 - 07.01.2017 11:35
  • Last Updated - 07.01.2017 11:29
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this VPS hypervisor.

    Among the updates, there are several critical security updates, which is the reason why the installation has been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Update 07.01.17 10:58 CET: The VPS hypervisor has been rebooted and should be back online within 10 minutes.

    Update 07.01.17 11:08 CET: The VPS hypervisor and all VMs are back online. The maintenance is now complete.

  • Date - 07.01.2017 10:45
  • Last Updated - 07.01.2017 11:08
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Manhattan
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this VPS hypervisor.

    Among the updates, there are several critical security updates, which is the reason why the installation has been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Update 07.01.17 10:29 CET: The VPS hypervisor has been shut down and should be back online within 10 minutes.

    Update 07.01.17 10:39 CET: The VPS hypervisor and all VMs are back online. The maintenance is now complete.

  • Date - 07.01.2017 10:20 - 07.01.2017 10:40
  • Last Updated - 07.01.2017 10:40
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    Among the updates, there are several critical security updates, which is the reason why the installation has been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Update 07.01.17 10:00 CET: The updates have been installed and the server has been rebooted. All websites and services are back online.

  • Date - 07.01.2017 09:50 - 07.01.2017 10:00
  • Last Updated - 07.01.2017 10:16
Failed Solid-State Drive (SSD) / Degraded RAID Array (Resolved)
  • Priority - High
  • Affecting System - Manhattan (Node), Madison (VPS), Phantom (VPS)
  • A Solid-State Drive (SSD) has failed on the VPS hypervisor "Manhattan". Thanks to RAID-10 redundancy, the physical server and all hosted VMs are online and the services are currently unaffected.

    The myLoc data center has already been contacted and requested to replace the faulty SSD.

    The server has an HP Smart Array with Hot Swap support, meaning that the faulty SSD can be replaced while the system is running and without causing downtime. Once the SSD gets replaced, the RAID Array will rebuild automatically. This process will only cause slow/degraded performance until it finishes.
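
    For illustration only, the rebuild window can be roughly estimated as drive capacity divided by the sustained rebuild rate. The figures in the sketch below are assumptions for the sake of the example, not the actual specifications of this array:

        # Illustrative estimate of a RAID rebuild window. The capacity and
        # rebuild rate below are assumed values, not the real specs of the
        # Manhattan array.
        def rebuild_hours(capacity_gb: float, rebuild_rate_mb_s: float) -> float:
            """Approximate rebuild time in hours."""
            return capacity_gb * 1024 / rebuild_rate_mb_s / 3600

        # e.g. a 480 GB SSD rebuilding at a sustained ~40 MB/s
        print(f"~{rebuild_hours(480, 40):.1f} hours")  # -> ~3.4 hours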

    All data has been backed up externally, right after the SSD was reported as failed. The only risk would be if another SSD were to fail before the current SSD is replaced and the RAID array finishes rebuilding completely.

    Update 02.01.17 22:25 CET: A staff member from the myLoc data center has confirmed our report and is currently consulting a technician regarding this issue.

    Update 03.01.17 02:25 CET: The faulty SSD is planned to be replaced during the course of this morning.

    Update 03.01.17 10:12 CET: The replacement of the SSD will take place in a few minutes.

    Update 03.01.17 10:25 CET: The SSD has been successfully replaced and the RAID Array is currently rebuilding. The estimated duration is approximately 3.5 hours.

    Update 03.01.17 11:38 CET: Current RAID rebuild status is at 29%.

    Update 03.01.17 12:07 CET: Current RAID rebuild status is at 78%.

    Update 03.01.17 12:33 CET: The RAID array has been completely rebuilt. No data loss or outage was caused during this process.

  • Date - 02.01.2017 21:25
  • Last Updated - 03.01.2017 12:44
Network Maintenance and Upgrade at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting System - Backbone Router at myLoc Data Center
  • Start Time: 20 November 2016 01:00 CET (Sunday)
    Estimated End Time: 20 November 2016 03:00 CET (Sunday)
    Duration: 2 Hours

    Equipment: Backbone Router
    Type of Work: Hardware Upgrade
    Impact of Work: Network Downtime
    Estimated Downtime: Approximately 5 minutes, up to 30 minutes

    During the above window, the networking team of the myLoc data center, where our servers are located, will be performing hardware upgrades of their backbone router.

    The data center expects the impact of this change to be minimal; however, you may experience some short periods of downtime and latency lasting a few minutes while the hardware is upgraded. The networking team will strive to keep the limitations as low as possible, but due to the extensive measures, unfortunately, it is not possible to carry out maintenance completely without interruption.

    Please don't hesitate to contact support if you have any additional questions or concerns. You may do so by opening a ticket or by replying to this email.

    We're sorry for any inconvenience this maintenance work may cause.

  • Date - 20.11.2016 01:00 - 20.11.2016 03:00
  • Last Updated - 21.11.2016 00:01
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    Among the updates, there are several critical security updates, which is the reason why the installation has been planned immediately.

    The kernel update requires a server reboot, which will result an outage of approximately 5 to 10 minutes.

    Update 28.10.16 23:43 CEST: The server is going down for reboot now.

    Update 28.10.16 23:47 CEST: The server has been rebooted. All services are starting up now.

    Update 28.10.16 23:48 CEST: All websites and services are back online.

    Thank you for your patience and understanding.

  • Date - 28.10.2016 23:40 - 28.10.2016 23:48
  • Last Updated - 28.10.2016 23:48
Server Neptune Unreachable (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The server Neptune is currently unreachable. We are investigating this issue and will post more information as soon as possible.

    Thank you for your patience and understanding.

    Update 16.10.16 13:25 CEST: The issue seems to be related to a network or power outage at the myLoc data center. It is still being investigated by their on-site technicians and we are awaiting their reply with more information.

    Update 16.10.16 13:34 CEST: The server is now back online. The outage was related, as presumed, to a network issue.

  • Date - 16.10.2016 12:41 - 16.10.2016 13:34
  • Last Updated - 16.10.2016 15:31
Planned Maintenance of the Uninterruptible Power Supply (Resolved)
  • Priority - Medium
  • Affecting Other - UPS1 and UPS2 at myLoc DUS3
  • As part of the regular maintenance of the uninterruptible power supply (UPS), myLoc data center will perform regular maintenance work on the UPS1 and UPS2 systems of the DUS3 data center. The maintenance is expected to begin on 22 September 2016 at 23:00 CEST and end the following morning on 23 September 2016 at 06:00 CEST.
     
    This work serves to improve reliability; the data center therefore asks for your understanding should this measure lead to any unexpected outage.

    The data center technicians don't expect any impact on the power supply, as the maintenance work can be performed without interrupting the power supply. Nevertheless, there is still a minimal risk.
     
    The data center has taken extensive precautions with the participating service providers and power suppliers in order to prevent any impact on the running systems/servers.

  • Date - 22.09.2016 23:00 - 23.09.2016 06:00
  • Last Updated - 07.10.2016 10:31
Partial Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • Due to a partial network outage at the myLoc data center, the server Neptune is currently offline. Only the area where this server is located is affected. All other servers are currently online.

    We will try to reach the data center and post more information as soon as possible.

    We're sorry for any inconvenience this outage may cause.

    Update 15.09.16 05:02 CEST: The network has been restored and the server is back online.

    Update 15.09.16 05:50 CEST: The server is offline again due to another network outage, unfortunately.

    Update 15.09.16 05:54 CEST: The network has been restored.

  • Date - 15.09.2016 04:47 - 15.09.2016 05:54
  • Last Updated - 15.09.2016 05:56
Replacing the emergency power supply at myLoc data center DUS3 (Resolved)
  • Priority - Medium
  • Affecting Other - myLoc Data Center DUS3
  • As part of a capacity expansion of the myLoc DUS3 data center, the emergency power supply will be replaced with a more powerful one. The installation is scheduled to take place on 08.09.2016 at 20:00 CEST and is expected to finish on 09.09.2016 at 01:00 CEST.

    This work serves to improve reliability; the data center therefore asks for your understanding should this expansion measure lead to any unexpected outage.

    The data center has taken extensive precautions with the participating service providers and power suppliers in order to prevent any impact on the running systems/servers.

    Update 09.09.16 05:32 CEST: The new power system has been installed and the emergency power is now ensured.

  • Date - 08.09.2016 18:00 - 09.09.2016 01:00
  • Last Updated - 09.09.2016 10:11
Scheduled Upgrade to PHP 5.6 (Resolved)
  • Priority - Low
  • Affecting System - Shared, Reseller and Managed Servers
  • With the PHP 5.5 branch reaching the end of its support time-frame, we will be upgrading all servers that are still running PHP 5.5 as the native version to PHP 5.6.

    The upgrade to PHP 5.6 is scheduled to take place on 02 September 2016.

    Most improvements in PHP 5.6 have no impact on existing code. There are a few incompatibilities and new features that should be considered, but overall the upgrade to PHP 5.6 should be seamless for almost all websites.

    What effects could the upgrade have on your website and scripts?

    If you have a rather old website and your scripts aren’t up-to-date, this might cause compatibility issues, where PHP errors and warnings would occur. These errors are normally logged in the file "error_log" inside the "public_html" folder and its sub-folders.
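
    For anyone who prefers to check programmatically, here is a minimal sketch (assuming a typical cPanel home directory layout and Python 3 being available on the account; the path and line count are illustrative) that prints the last few lines of every error_log found under public_html:

        # Minimal sketch: print the tail of every "error_log" under public_html.
        # The document root path and the number of lines shown are assumptions.
        import os

        DOCROOT = os.path.expanduser("~/public_html")  # adjust to your account

        for root, _dirs, files in os.walk(DOCROOT):
            if "error_log" in files:
                path = os.path.join(root, "error_log")
                with open(path, errors="replace") as fh:
                    tail = fh.readlines()[-5:]  # last 5 entries
                print("--- " + path + " ---")
                print("".join(tail), end="")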

    How should you prepare for the upgrade?

    You should make sure that all your scripts are upgraded to the latest version available and are compatible with PHP 5.6. Not only is this required to ensure compatibility with the latest PHP standards, it's also essential in order to protect your website from possible security vulnerabilities.

    Many popular scripts (WordPress, Joomla, Magento, etc.) can be easily updated using Softaculous, our free script auto-installer, which you can find inside cPanel under the Software/Services group. To import an existing installation, please follow the import guide and then run the upgrade by following the upgrade guide.

    If you're a developer or have a custom website programmed in PHP, there's an official guide for migrating from PHP 5.5 to PHP 5.6.

    How you can keep using an older, unsupported PHP version

    For all clients who aren't prepared for the upgrade to PHP 5.6 yet, we offer the possibility to enable an older PHP version for your account, with the help of CloudLinux HardenedPHP. If you still require PHP 5.5 until you upgrade your scripts/website, you can enable it under cPanel » Select PHP Version.

    However, please note that it's highly recommended to use the latest supported version of PHP for security and performance reasons. An older PHP version should only be enabled temporarily, during the transitional phase while you upgrade your scripts/website.

    Should you have any questions or concerns in regards to this upgrade, please don’t hesitate to let us know.

    Update 01.09.16 20:45 CEST: The native PHP on all servers has been successfully upgraded from PHP 5.5 to PHP 5.6. This task was performed a few hours earlier than planned in order to prevent interference with nightly tasks (backup, cPanel update, etc.). If you notice any issues, please double-check that your scripts are up-to-date and compatible with PHP 5.6. You can switch back to PHP 5.5 in cPanel under Select PHP Version if you need more time to upgrade your scripts.

  • Date - 01.09.2016 20:00 - 02.09.2016 01:00
  • Last Updated - 01.09.2016 20:47
UK Cluster 1 Unreachable (Resolved)
  • Priority - Medium
  • Affecting System - UK DNS/SMX Cluster (cluster1)
  • The UK Cluster 1 is unreachable and fails to boot. We've contacted the data center and await their manual intervention to restore the server.

    This server is used just for DNS and email redundancy, so the outage doesn't have any negative impact on the web hosting service. We still have the US Cluster available as redundancy.

    Update 29.07.16 11:18 CEST: The server is back online.

  • Date - 29.07.2016 08:42 - 29.07.2016 11:18
  • Last Updated - 29.07.2016 11:45
Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting System - myLoc Data Center
  • There is currently an outage at the myLoc data center, which affects all servers and services.

    We will update you as soon as we have more information.

    Update 19.07.16 18:50 CEST: The on-site data center technicians have confirmed the outage and are currently working on resolving the issue.

    Update 19.07.16 19:04 CEST: According to the data center, the outage is caused by a power outage. They are still working on restoring the service.

    Update 19.07.16 19:18 CEST: One of our VPS nodes is currently back online (Sierra).

    Update 19.07.16 19:21 CEST: The power has been restored in the majority of the data center areas. The remaining servers should be powered up by the on-site technicians soon.

    Update 19.07.16 19:30 CEST: The Avalon VPS node and all hosted VMs are now back online. The server Neptune remains offline, but should be up soon.

    Update 19.07.16 19:50 CEST: All servers, websites and services are back online. The outage should be permanently resolved.

    The data center will update their status at http://www.myloc-status.de

    We apologize for the inconvenience caused. Thank you for your patience and understanding.

  • Date - 19.07.2016 18:45 - 19.07.2016 19:40
  • Last Updated - 24.07.2016 18:31
Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • We have noticed a network outage at the myLoc data center, affecting the area where the server Neptune is located. This server is currently unreachable. All other servers are online.

    Since the outage also affects the data center website and helpdesk, the on-site technicians should already be working on resolving this issue.

    We will try to reach the data center and will post all available updates as soon as possible.

    We kindly ask you for your patience and apologize for the inconvenience caused.

    Update 16.06.16 11:55 CEST: The server is currently back online. We're awaiting confirmation from the DC if the network issue has been permanently resolved.

    Update 16.06.16 13:05 CEST: The data center has confirmed the outage of a core network backbone component, which has been permanently fixed in a timely manner. There should be no further outages.

    Thank you for your understanding.

  • Date - 16.06.2016 11:44 - 16.06.2016 11:55
  • Last Updated - 16.06.2016 13:06
LiteSpeed Web Server Failure (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • An issue with LiteSpeed Web Server (LSWS) has been detected, which prevents the websites from opening.

    We are already looking into this issue and will update you as soon as possible.

    Update 10.06.16 01:58 CEST: We've temporarily switched to Apache and will attempt to remove LSWS and re-install it afterwards. Some websites may return 500 Internal Server Error until we fix LSWS.

    Update 10.06.16 02:04 CEST: LSWS has been successfully re-installed and all websites open properly at the moment. Attempting to restore the previous configuration.

    Update 10.06.16 02:20 CEST: The configuration has been restored and there was no further downtime. All websites continue to run properly.

    The root of the issue is a bug in the recently released LiteSpeed Web Server v5.1.6. We reported it earlier today and the developers advised us to install a new build, but the bug evidently still wasn't fixed, causing additional issues. We will await a permanent fix for this bug before we upgrade to LSWS 5.1.6 again, and we will test the new version on smaller servers before applying the update to all servers.

    Update 10.06.16 16:00 CEST: LSWS becomes unresponsive again periodically for a yet unknown reason. We are currently investigating this issue.

    Update 10.06.16 16:19 CEST: The issue seems to be caused by a DNS lookup timeout triggered by ModSecurity. We're now temporarily disabling ModSecurity to see if the problem persists.

    Update 10.06.16 17:03 CEST: The LSWS crashes were caused by 3 ModSecurity rules that perform RBL checks against SpamHaus in order to block requests originating from known spammer/hacker IPs. These rules have been disabled, because when the SpamHaus service is unreachable (as it was in this case), the web server crashes due to the DNS/RBL lookup timeout.
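
    For background, an RBL check is essentially a DNS lookup of the client IP (with its octets reversed) under the blocklist zone, so an unreachable DNS backend stalls the request until the resolver gives up. The sketch below only illustrates the idea with a hard lookup deadline; it requires the third-party dnspython package, and the zone and deadline are illustrative assumptions rather than our exact ModSecurity configuration:

        # Illustrative RBL/DNSBL check with a hard lookup deadline, so an
        # unreachable blocklist cannot stall the caller indefinitely.
        # Requires the third-party "dnspython" package (dnspython 2.x).
        import dns.exception
        import dns.resolver

        def is_listed(ip, zone="zen.spamhaus.org"):
            query = ".".join(reversed(ip.split("."))) + "." + zone
            try:
                dns.resolver.resolve(query, "A", lifetime=2.0)  # 2-second deadline
                return True                                      # any answer = listed
            except dns.resolver.NXDOMAIN:
                return False                                     # not listed
            except (dns.exception.Timeout, dns.resolver.NoAnswer,
                    dns.resolver.NoNameservers):
                return False                  # fail open on lookup problems

        print(is_listed("127.0.0.2"))  # Spamhaus test entry; should print True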

    The web server has been online and running stable for over 30 minutes. This issue should be permanently resolved now, but we'll keep monitoring the server closely.

    We apologize for the inconvenience caused. Thank you for your patience and understanding.

  • Date - 10.06.2016 01:41 - 10.06.2016 17:04
  • Last Updated - 10.06.2016 17:06
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    Among the updates, there are several critical security updates, which is the reason why the installation has been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Thank you for your patience and understanding.

    Update 09.06.16 19:49 CEST: The node and all VMs are going down for reboot now.

    Update 09.06.16 19:59 CEST: There is currently an issue with Xen, which prevents the VMs from booting up. We are currently investigating it and will update you as soon as we can.

    Update 09.06.16 20:07 CEST: The issue has been permanently resolved and all VMs are back online. We're sorry for the inconvenience caused.

    The maintenance has been successfully completed.

  • Date - 09.06.2016 19:45 - 09.06.2016 20:07
  • Last Updated - 09.06.2016 20:29
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    Among the updates, there are several critical security updates, which is the reason why the installation has been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Update 09.06.16 19:25 CEST: The server is going down for reboot now.

    Update 09.06.16 19:30 CEST: All services are currently starting up.

    Update 09.06.16 19:32 CEST: All services and websites are back online. The maintenance has been successfully completed.

    Thank you for your patience and understanding.

  • Date - 09.06.2016 19:25 - 09.06.2016 19:32
  • Last Updated - 09.06.2016 19:37
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    Among the updates, there are several critical security updates, which is the reason why the installation has been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Update 09.06.16 18:50 CEST: The node and all VMs are going down for reboot now.

    Update 09.06.16 18:59 CEST: The VPS node and all VMs are back online. The maintenance is now complete.

    Thank you for your patience and understanding.

  • Date - 09.06.2016 18:45 - 09.06.2016 19:00
  • Last Updated - 09.06.2016 19:18
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    Among the updates, there are several critical security updates, which is why the installation and reboot have been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Update 31.05.16 13:42 CEST: The VPS node and all VMs are going down for reboot now.

    Update 31.05.16 13:48 CEST: The maintenance has been successfully completed. The node and all VMs are back online.

    Thank you for your patience and understanding.

  • Date - 31.05.2016 13:20 - 31.05.2016 13:50
  • Last Updated - 31.05.2016 13:49
VPS Node Avalon Unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Avalon
  • The VPS node Avalon is currently unreachable. At the moment it seems to be a network issue at the data center.

    We have opened a priority ticket at the data center and will update you with more information as soon as possible.

    Thank you for your patience and understanding.

    Update 04.04.16 21:59 CEST: The VPS node is back online. It was caused by a temporary network issue. We're sorry for any inconvenience it may have caused.

  • Date - 04.04.2016 21:54 - 04.04.2016 21:59
  • Last Updated - 04.04.2016 22:20
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    Among the updates, there are several critical security updates, which is why the installation and reboot have been planned immediately.

    The kernel update requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    Update 01.04.16 15:45 CEST: The server is going down for reboot now.

    Update 01.04.16 15:50 CEST: The server has been rebooted. All services are currently starting up.

    Update 01.04.16 15:53 CEST: All services and websites are back online. The maintenance has been successfully completed.

    Thank you for your patience and understanding.

  • Date - 01.04.2016 15:45 - 01.04.2016 15:53
  • Last Updated - 01.04.2016 15:53
Software Upgrade Core-Router POP Interxion Düsseldorf (Resolved)
  • Priority - Low
  • Affecting Other - myLoc Data Center
  • During this period, myLoc data center will perform a software upgrade of their core router BRIP-A-DUS4 in their POP Interxion Düsseldorf.
    Thus, the links to the following carriers will not be available:

    - TeliaSonera DUS
    - DTAG German Telekom DUS
    - Hibernia Networks DUS

    Before starting the maintenance, myLoc will redirect traffic through alternate paths. In some cases, this can lead to higher latencies and modified routing.

  • Date - 31.03.2016 04:00 - 31.03.2016 05:00
  • Last Updated - 01.04.2016 15:19
Scheduled Server Reboots to Patch CVE-2015-7547 (Resolved)
  • Priority - High
  • Affecting System - All Servers
  • All our servers will need to be rebooted in order to apply the patch for the CVE-2015-7547 security vulnerability.

    CVE-2015-7547 is a critical vulnerability in glibc affecting versions 2.9 and later. The DNS client side resolver function getaddrinfo() used in the glibc library is vulnerable to a stack-based buffer overflow attack. This can be exploited in a variety of scenarios, including man-in-the-middle attacks, maliciously crafted domain names, and malicious DNS servers.
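
    As a quick, informal check (illustrative only; the authoritative source is your distribution's package and errata information), the glibc version a host reports can be read like this:

        # Print the glibc version reported by the host. CVE-2015-7547 affects
        # glibc 2.9 and later until the vendor patch is applied; note that
        # patched distribution packages keep the same base version number,
        # so this only tells you whether the host is in the affected range.
        import platform

        libc, version = platform.libc_ver()
        print(libc, version)  # e.g. "glibc 2.17" on CentOS 7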

    This will cause an outage between 2 and 8 minutes for all servers.

    We're sorry for any inconvenience this may cause. Thank you for your understanding.

  • Date - 21.02.2016 05:00 - 21.02.2016 07:30
  • Last Updated - 21.02.2016 07:19
Packet Loss - Managed VPS Hosting (Resolved)
  • Priority - Medium
  • Affecting Server - Avalon
  • We have noticed a slight packet loss (up to 5%) on one of our managed VPS hypervisors (Avalon). All other servers are unaffected.

    The data center technicians are working on the network in the area where the server Avalon is located. They are aware of slower network performance and packet loss in this area and are working on fixing this problem permanently.

    Updates will be posted once available. We're sorry for any inconvenience this may cause.

    Update 27.01.16 00:50 CET: At this moment we no longer notice packet loss from various locations. We're still awaiting the analysis from the data center, but so far it appears that the problem has been resolved.

    Update 27.01.16 09:40 CET: We've noticed a packet loss below 3% again this morning. The data center technicians are currently working on resolving this issue permanently.

    Update 27.01.16 10:00 CET: The issue has been permanently resolved. It was caused by a damaged fiber optic cable, which has been identified and replaced.

  • Date - 26.01.2016 15:00 - 27.01.2016 10:00
  • Last Updated - 29.01.2016 21:47
Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting System - myLoc Data Center
  • We have noticed a network issue affecting the entire data center and all our servers. This outage is probably related to the recent maintenance of their access network.

    We will try to reach the data center and will post all available updates as soon as possible.

    We kindly ask you for your patience and understanding and apologize for the inconvenience caused.

    Update 26.01.16 11:40 CET: The data center is already aware of the issue and is working at highest priority to fix the network. They will post updates on their own network status page at http://www.myloc-status.de

    Update 26.01.16 11:45 CET: All servers except for Neptune are back online. This server is located in a different part of the data center. We expect it to be back online soon.

    Update 26.01.16 12:15 CET: The server Neptune and all other servers are back online. We're awaiting confirmation from the data center whether the problem has been permanently resolved.

    Update 26.01.16 12:38 CET: The problem should be permanently resolved in all areas where our servers are located.

    Once again we're sorry for the inconvenience and thank you for your understanding.

  • Date - 26.01.2016 11:30 - 26.01.2016 12:38
  • Last Updated - 27.01.2016 09:31
Maintenance Work on Access Network at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Other - Access Network - myLoc Data Center
  • As part of the regular maintenance of the network connection in the colocation area, myLoc data center will perform maintenance work on 25 January 2016 between 23:00 CET and 06:00 CET.

    During this maintenance, software updates will be installed on multiple network components. This may lead to restrictions in the availability of our systems for up to 10 minutes during the mentioned period.

    We apologize for any inconvenience that may be caused by the maintenance and thank you in advance for your understanding. If you have any questions, please don't hesitate to contact us.

  • Date - 25.01.2016 23:00 - 26.01.2016 09:47
  • Last Updated - 26.01.2016 08:28
Maintenance Work on Access Network at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Other - Access Network - myLoc Data Center
  • As part of the regular maintenance of the network connection in the colocation area, myLoc data center will perform maintenance work on 20 January 2016 between 23:00 CET and 06:00 CET.

    During this maintenance, software updates will be installed on multiple network components. This may lead to restrictions in the availability of our systems for up to 10 minutes during the mentioned period.

    We apologize for any inconvenience that may be caused by the maintenance and thank you in advance for your understanding. If you have any questions, please don't hesitate to contact us.

  • Date - 20.01.2016 23:00 - 21.01.2016 06:00
  • Last Updated - 26.01.2016 08:28
UK DNS/SMX Cluster Temporarily Unavailable (Resolved)
  • Priority - Low
  • Affecting System - UK DNS Cluster (cluster1)
  • The UK DNS and SMX cluster will be temporarily offline for up to 12 hours while we re-install the operating system and re-configure the server.

    This outage will not cause any disruption for your websites and emails, as the cluster is only used for redundancy. The US DNS/SMX cluster remains online in case of an outage and the main servers continue to work as expected.

    If you have any questions, please contact our customer service department. Thank you for your understanding.

    Update 25.01.16 16:27 CET: The UK DNS and SMX cluster has been successfully re-installed and re-configured.

     

  • Date - 25.01.2016 14:30 - 25.01.2016 16:27
  • Last Updated - 25.01.2016 16:28
Scheduled Maintenance - Kernel Update and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we will install the latest kernel version and all available software updates on this server.

    The kernel update requires a server reboot, which will result in an outage of approximately 10 minutes.

    Thank you for your understanding.

    Update 11.11.15 01:37 CET: The server is going down for reboot now.

    Update 11.11.15 01:43 CET: The maintenance has been successfully completed. All websites and services are back online.

  • Date - 11.11.2015 01:35 - 11.11.2015 01:43
  • Last Updated - 11.11.2015 01:43
Temporary Outage of LiteSpeed Web Server (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • A temporary license conflict caused LiteSpeed Web Server to become unavailable for approximately 20 minutes.

    We have requested the license to be reissued and were able to resolve this issue.

    We apologize for the inconvenience and will try to work with the license vendor to prevent this issue from happening in the future.

  • Date - 31.10.2015 13:24 - 31.10.2015 13:44
  • Last Updated - 31.10.2015 21:23
Routing issue at myLoc Data Center (Resolved)
  • Priority - High
  • Affecting Other - myLoc Data Center
  • During the mentioned period, there was a routing issue at the myLoc data center with the carrier Cogent Communications, causing our servers to become unreachable from certain locations.

    The data center has decommissioned the respective connection and the routing is stable again.

    We're sorry for any inconvenience this may have caused.

  • Date - 28.10.2015 06:35 - 28.10.2015 06:47
  • Last Updated - 28.10.2015 11:20
Unexpected Outages of New York DNS/SMX Cluster (Resolved)
  • Priority - Medium
  • Affecting System - New York DNS Cluster
  • Over the past week, we have experienced multiple short outages of the New York DNS/SMX Cluster. Our investigations have shown that these issues are caused by the provider where the cluster is located.

    We have reported these outages to the provider and expect them to provide a permanent solution.

    The DNS/SMX clusters are used for redundancy only, in case the main hosting servers go offline. Thus, an outage of a DNS/SMX cluster is not critical and doesn't affect any running services, as the DNS and mail services continue to work properly through the main server and our first cluster in London.
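
    If you would like to verify this redundancy yourself, the small sketch below lists a domain's published NS and MX records; more than one entry per type indicates that a secondary cluster is configured. It assumes the third-party dnspython package, and example.com is only a placeholder:

        # List the NS and MX records of a domain to confirm that more than one
        # nameserver / mail exchanger is published. "example.com" is a placeholder.
        # Requires the third-party "dnspython" package.
        import dns.resolver

        domain = "example.com"
        for rdtype in ("NS", "MX"):
            print(rdtype + ":")
            for rdata in dns.resolver.resolve(domain, rdtype):
                print("  " + rdata.to_text())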

    Update 26.09.2015 13:42 CEST: We have received a statement that the outages were caused by multiple DDoS attacks and a failure of the power supply. Both of these issues have been resolved today.

  • Date - 26.09.2015 08:30 - 26.09.2015 13:44
  • Last Updated - 26.09.2015 13:44
Planned Server Reboot(s) due to Package Conflicts (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • We have identified package conflicts on the hypervisor "Avalon" due to an update that was unable to finish correctly. This causes issues with the operating system and virtualization platform (Xen), which makes some important functions unusable.

    Our system administrators will try to remove and re-install the affected packages. One or more server reboots will be necessary to complete this task, which will cause a downtime of approximately 5 to 10 minutes. The hosted VMs will be offline during this time, unfortunately.

    We apologize for any inconvenience caused, and expect to resolve this issue shortly.

    Update 24.09.2015 03:00 CEST: The duplicate packages have been removed and re-installed. There are no longer conflicts reported by the OS updater. There is still a residual issue with Xen that the system admins are investigating.

    Update 24.09.2015 18:00 CEST: The Xen issue should be resolved with a server reboot in order to apply the recent package re-installs. Our system admins will reboot the hypervisor within the next hour and investigate if the issue persists.

    Update 24.09.2015 19:32 CEST: The hypervisor is going down for reboot now.

    Update 24.09.2015 19:44 CEST: The hypervisor and all VMs have been rebooted and are back online. We're still investigating if there are any persisting issues.

    Update 24.09.2015 20:30 CEST: All issues have been permanently resolved. We were able to limit the downtime of this entire process to 10 minutes.

    Thank you for your patience and understanding.

  • Date - 23.09.2015 18:00 - 24.09.2015 20:30
  • Last Updated - 26.09.2015 08:30
Upgrade to PHP 5.5 (native) (Resolved)
  • Priority - Medium
  • Affecting System - Shared, Reseller and Managed Servers
  • With the PHP 5.4 branch nearing the end of its support time-frame on 14 September 2015, we will be upgrading all servers that are still running PHP 5.4 to PHP 5.5.

    The upgrade to PHP 5.5 is scheduled to take place on 01 September 2015. A future upgrade to PHP 5.6 will presumably follow next year in June 2016.

    How does this affect your website and scripts? If you have a rather old website and your scripts aren’t up-to-date, this might cause compatibility issues, where PHP errors and warnings would occur.

    Should you still require PHP 5.4 until you upgrade your scripts and website, you can enable it under cPanel » Select PHP Version, but please note that we will disable PHP 5.4 completely on 14 October 2015, or as soon as a security vulnerability is discovered; whichever comes first.

    How should you prepare for the upgrade? You should make sure that all your scripts are upgraded to the latest version available and are compatible with PHP 5.5. Not only is this required to ensure compatibility with the latest PHP standards, it's also essential in order to close any possible security vulnerabilities.

    Many popular scripts (WordPress, Joomla, Magento, etc.) can be easily updated using Softaculous, our free script auto-installer, which you can find inside cPanel under the Software/Services group. To import an existing installation, please follow this guide and then upgrade it by following this guide.

    Update 01.09.15 13:30 CEST: The upgrade process to PHP 5.5 has been started. This may cause temporary 500 Internal Server Errors until the upgrade is finished. Older scripts that don't support PHP 5.5 will return errors, warnings or won't function at all. Please upgrade the scripts to the latest available version in this case.

    Update 01.09.15 14:00 CEST: The upgrade to PHP 5.5 as the native version has been completed. If you still require PHP 5.4, it will still be available until 14 October 2015 in cPanel » Select PHP Version, after which we plan to remove it completely.

    As always, please keep your scripts up-to-date on a regular basis, in order to preserve compatibility with the latest PHP version and prevent security issues.

  • Date - 01.09.2015 13:30 - 01.09.2015 14:00
  • Last Updated - 01.09.2015 14:18
Scheduled Server Reboot - Kernel Update (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we are going to upgrade the kernel and install all available updates on the server Neptune.

    This process requires a server reboot, which will lead to an outage of approximately 5 to 10 minutes.

    We're sorry for any inconvenience this may cause.

    Update 14.08.15 18:02 CEST: The server is going down for reboot now.

    Update 14.08.15 18:07 CEST: Reboot completed. All services are now back online.

    Thank you for your patience.

  • Date - 14.08.2015 18:00 - 14.08.2015 18:07
  • Last Updated - 14.08.2015 18:07
Failed system update - Reboot possibly necessary (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • While installing system updates, the update process has failed without finishing the transaction. This has left some system packages duplicated and now there's a conflict with the update manager.

    Our system administrators are trying to clean up the duplicated packages and install the remaining updates.

    Unfortunately, this will probably require a system reboot, which will cause a downtime of approximately 5 to 10 minutes.

    Update 13.08.15 17:43 CEST: We are still looking into this issue. So far it doesn't affect any services and we'll try to fix this issue seamlessly.

    Update 14.08.15 02:15 CEST: The root of this issue has been identified. The HP monitoring services cannot be stopped, so the update hangs while trying to stop them. Our system administrators are attempting to fix or remove the HP monitoring services so that the remaining updates can be installed. We will try everything possible to resolve this issue without causing any downtime.

    Update 14.08.15 08:28 CEST: We have removed the duplicate packages, removed the conflicting HP monitoring service and successfully installed all updates. Among the updates there was also a Xen update, so we will still need to schedule a reboot in order to apply the update. The exact date and time will be posted soon.

    Update 14.08.15 14:05 CEST: The VPS node will be rebooted in 5 minutes and should be back online within 10 minutes.

    Update 14.08.15 14:12 CEST: All updates have been successfully installed. The VPS node and all VMs have been successfully rebooted.

    Thank you for your patience and understanding.

  • Date - 13.08.2015 14:20 - 14.08.2015 14:12
  • Last Updated - 14.08.2015 14:33
Scheduled Maintenance - Kernel and Xen Upgrade (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • The VPS node Sierra and all related VMs will be shortly unavailable while we install the latest kernel and all available software updates.

    A reboot will be necessary, which will take approximately 10 minutes.

    Update 13.08.15 12:47 CEST: The VPS node has been shut down and should be back online within 10 minutes.

    Update 13.08.15 12:52 CEST: The maintenance has been completed. All VMs are back online and everything should work properly.

    Thank you for your understanding.

  • Date - 13.08.2015 12:10 - 13.08.2015 12:52
  • Last Updated - 13.08.2015 14:26
Scheduled Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The server Neptune is scheduled for a reboot in order to unmount a ploop image that was recently set up by OptimumCache.

    All services will be shortly unavailable for approximately 5 minutes.

    We're sorry for any inconvenience this may cause.

    Update 29.07.15 01:07 CEST: The server has been rebooted and all services are back online.

    Thank you for your understanding.

  • Date - 29.07.2015 01:00 - 29.07.2015 01:07
  • Last Updated - 29.07.2015 01:18
Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc Data Center (DUS1 and DUS3)
  • We are aware of a network issue affecting the data center and most of our servers. We are already investigating and will post updates as soon as possible.

    We kindly ask you for your patience and understanding.

    Update 22.07.15 01:00 CEST: One of our servers (Sierra) is online again. We hope that the rest will follow soon.

    Update 22.07.15 01:10 CEST: The data center has confirmed on their status page that there's currently a technical issue.

    Update 22.07.15 01:21 CEST: An update has been posted by the data center that it's a technical issue affecting the network of the data centers DUS1 and DUS3. Unfortunately, the shared/reseller and managed VPS hosting servers are located in both of these data centers. The positive news is that this is NOT a Distributed Denial of Service attack, as we initially presumed. The on-site technicians are already working on fixing the network issue.

    Update 22.07.15 01:58 CEST: All servers are back online, but we do not yet know whether this is a permanent or temporary fix.

    Update 22.07.15 02:01 CEST: The network issue has been permanently resolved. All servers are online.

    We're sorry for the inconvenience caused by this outage and hope that the data center will take all necessary measures to prevent it in the future.

  • Date - 22.07.2015 00:40 - 22.07.2015 02:01
  • Last Updated - 22.07.2015 02:58
Scheduled Emergency Maintenance (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • On 20 July 2015 between 06:00 and 14:00 CEST, the myLoc data center will perform urgent maintenance work on one of their server segments. This will cause a downtime of approximately 60 minutes for the VPS hypervisor "Avalon", along with all hosted virtual servers.

    During this time, we will not have access to the server for approximately 60 minutes. In addition, the server will be disconnected from the power supply.

    As soon as we receive any updates from the data center, we will post them here on the network status page.

    We apologize for the inconvenience caused.

    Update 20.07.15 11:44 CEST: The server has been powered off. We will update the status once the server is back online, within approximately 60 minutes.

    Update 20.07.15 12:23 CEST: The hypervisor and all hosted virtual servers are now back online.

    Thank you for your patience and understanding.

  • Date - 20.07.2015 06:00 - 20.07.2015 12:25
  • Last Updated - 20.07.2015 12:25
LiteSpeed Web Server Major Upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • We will upgrade the web server (LiteSpeed Web Server) to the latest version, which is a major release. This may cause temporary issues with some websites, which we will attempt to fix immediately after the upgrade.

    The upgrade will bring many performance benefits and support for the latest HTTP/2 and SPDY protocols. More information is available at http://blog.litespeedtech.com/2015/04/17/lsws-5-0-is-out-support-for-http2-esi-litemage-cache/

    Our apologies for any inconvenience the upgrade may cause.

    Update 18.07.15 08:12 CEST: The upgrade to LSWS 5.0 has been successfully completed. We have not experienced any issues and all websites seem to work properly.

    Please contact our technical support department if you notice any issues that might be caused by the upgrade. Thank you.

  • Date - 18.07.2015 07:45 - 18.07.2015 08:13
  • Last Updated - 18.07.2015 08:13
Server Neptune Unresponsive (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • We are currently aware of an outage that affects the server Neptune. We are already investigating this issue and will try to resolve it as soon as possible.

    Thank you for your patience and sorry for the inconvenience.

    Update 17.07.15 08:34 CEST: The server has been rebooted and should be back online within approximately 10 minutes.

    Update 17.07.15 08:38 CEST: The server is back online, but another reboot is necessary, as the kernel has been upgraded. All services should be fully operational afterwards.

    Update 17.07.15 08:44 CEST: The server has been restarted. All services are back online and fully operational.

  • Date - 17.07.2015 08:32
  • Last Updated - 17.07.2015 08:49
High Load on Server Neptune (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • We have been notified that the server Neptune is extremely overloaded. We're currently investigating this issue and will try to resolve it as soon as possible.

    Thank you for your patience.

    Update 01.07.15 13:10 CEST: We have identified the process that was causing the overload and terminated it. All services have been restarted and should work properly again.

    We're sorry for any inconvenience this may have caused. Please contact us if you notice further issues.

  • Date - 01.07.2015 13:00 - 01.07.2015 13:15
  • Last Updated - 01.07.2015 13:23
Network Outage Affects VPS Hypervisor Sierra (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • The VPS hypervisor "Sierra" is currently unreachable. We are investigating the cause of this issue and will update the status as soon as possible.

    Update 06.06.15 06:48 CEST: This appears to be a network issue at the data center, as the network gateway is also unreachable. We have opened an emergency support ticket at the data center and await the on-site technicians to fix the problem. Updates will follow as soon as we have more information.

    Update 06.06.15 06:57 CEST: The data center technicians have confirmed the network issue, which affects multiple servers. They are already working at highest priority on a fix.

    Update 06.06.15 07:03 CEST: The network issue seems to be resolved, as the server is back online, but there are still some short outages on certain IPs.

    Update 06.06.15 08:05 CEST: The technicians are still working on a permanent solution. Most servers and IP addresses are back online, but some are still experiencing outages.

    Update 06.06.15 09:53 CEST: The root cause of this issue was a massive DDoS attack against the data center's network. The attack has been mitigated and there should be no further issues. All servers and IPs are back online.

    Thank you for your patience and understanding.

  • Date - 06.06.2015 06:39 - 06.06.2015 09:53
  • Last Updated - 06.06.2015 11:02
Scheduled Security Maintenance - VENOM Patch (Resolved)
  • Priority - High
  • Affecting Other - Xen VPS Hypervisors
  • On 16 May 2015 at 19:30 CEST, we will start patching for the latest security flaw called VENOM.

    VENOM (CVE-2015-3456) is a security vulnerability in the virtual floppy drive code used by many computer virtualization platforms (including Xen). This vulnerability may allow an attacker to escape from the confines of an affected virtual machine (VM) guest and potentially obtain code-execution access to the host. Absent mitigation, this VM escape could open access to the host system and all other VMs running on that host, potentially giving adversaries significant elevated access to the host’s local network and adjacent systems.

    Once we've applied the VENOM patch on all hypervisors, we'll need to stop/start all VMs in order for the security patch to take effect. The outage is estimated to take between 5 and 15 minutes.

    Thank you for your understanding and patience during this time. If you have any questions or concerns, please contact us.

    Update 16.05.15 19:45 CEST: All Xen hypervisors have been patched and all VMs have been restarted. The maintenance is now complete.

  • Date - 16.05.2015 19:30 - 16.05.2015 19:45
  • Last Updated - 16.05.2015 19:46
Maintenance - DUS1 - Air Conditioning and CHP (Resolved)
  • Priority - Low
  • Affecting Server - Madison (previously Neptune)
  • From 03 May 15 to 08 May 15 the data center will perform maintenance on their air conditioning, thermal power station and combined heat and power unit (CHP) in the DUS1 area. Some of our servers are located in this area.

    The maintenance work will begin on 03 May 15 at 23:00 CEST and is expected to last until 08 May 15 at 16:00 CEST.

    The data center strives to complete the maintenance as soon as possible, without impact on our systems. Thank you in advance for your understanding in case this would lead to any outage after all.

  • Date - 03.05.2015 23:00 - 08.05.2015 16:00
  • Last Updated - 11.05.2015 14:03
PHP Security Update (Resolved)
  • Priority - Medium
  • Affecting System - Shared, Reseller and Managed Servers
  • On 22.04.2015 we will be updating PHP and related components on all managed servers. The updates address some important security vulnerabilities, thus we will be updating all servers as soon as possible.

    Accounts hosted on servers running CloudLinux might experience temporary 500 Internal Server Errors until the updated components replicate to the CageFS containers of all accounts. This process usually takes between 2 and 15 minutes.
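
    Purely as an illustration of how short this window usually is, a site can be polled until it stops returning 500 errors. The URL and retry interval below are placeholders, not part of the maintenance itself:

        # Illustrative poller: re-check a URL until it no longer returns HTTP 500,
        # e.g. while updated PHP components replicate into the CageFS containers.
        # The URL and the retry interval are placeholder assumptions.
        import time
        import urllib.error
        import urllib.request

        URL = "https://www.example.com/"

        while True:
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    print("OK:", resp.status)
                    break
            except urllib.error.HTTPError as err:
                if err.code == 500:
                    print("Still 500, retrying in 30 seconds ...")
                    time.sleep(30)
                else:
                    raise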

    Thank you for your understanding.

    Update 22.04.2015 11:20 CEST: The update has been successfully installed on all servers. If you notice any issues, please contact our technical support department.

  • Date - 22.04.2015 10:30 - 22.04.2015 11:20
  • Last Updated - 22.04.2015 11:20
Network Issue Affecting VPS Node Sierra (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • We are aware of a network issue that affects the VPS hypervisor Sierra. Some IP ranges are unreachable, which indicates that the root of the issue is on the data center side.

    The data center has been contacted and we're awaiting their reply. Updates will be posted as soon as we have more information.

    We're sorry for any inconvenience this issue may cause.

    Update 03.04.15 18:29 CEST: The data center has confirmed the network issue on their side and is already working on fixing it.

    Update 03.04.15 18:38 CEST: The network issue seems to be resolved. All servers have been back online for 3 minutes.

    Thank you for your patience and understanding.

  • Date - 03.04.2015 18:08 - 03.04.2015 18:38
  • Last Updated - 03.04.2015 18:39
Planned Maintenance - Xen Security Updates (Resolved)
  • Priority - High
  • Affecting Other - Xen Virtual Servers
  • We are going to install the latest Xen security updates on all VPS hypervisors, which require a reboot. This will result in an outage of approximately 5 to 15 minutes for all virtual servers.

    This maintenance is required to provide patches and updates addressing recent advisory notices for the Xen virtualization technology used in our virtual server network. Xen is commonly used in cloud and virtual machine hosting, and a number of our competitors have carried out or will be carrying out similar updates.

    Thank you for your patience and understanding.

    Update 02.04.15 17:48 CEST: All VPS hypervisors have been successfully updated and rebooted.

  • Date - 02.04.2015 17:30 - 02.04.2015 17:48
  • Last Updated - 02.04.2015 17:50
Scheduled Maintenance - PHP Security Update (Resolved)
  • Priority - Medium
  • Affecting Other - Managed Servers
  • Multiple security updates have been released today for PHP. We are going to update all servers to the latest PHP version.

    On servers running CloudLinux, the update process may cause temporary 500 Internal Server Errors. The errors should resolve automatically once the updated PHP files are replicated across the CageFS containers of each hosting account.

    We estimate the update to take between 5 and 20 minutes. Thank you for your understanding.

    Update 25.03.15 01:15 CET: The update is in progress on all servers.

    Update 25.03.15 01:40 CET: All servers have been successfully updated.

    If you notice any issues, please let us know. Thank you.

  • Date - 25.03.2015 01:10 - 25.03.2015 01:40
  • Last Updated - 25.03.2015 01:40
Possible Power Outage Due To Solar Eclipse (Resolved)
  • Priority - Low
  • Affecting Other - myLoc Data Center (Düsseldorf, Germany)
  • As you may have heard from different media channels, on 20 March 2015 between 10:35 and 10:47 CET, a solar eclipse will be visible across Europe.

    This natural event can lead to power outages, which might have an impact on many infrastructures that rely on electricity, including data centers. Although all data centers we work with have UPS systems and emergency power plants to bridge power failures, there is still a very small probability of an outage.

    Be assured that we and the on-site data center technicians will be ready to undertake all possible measures in case anything happens because of this natural event.

  • Date - 20.03.2015 10:35 - 20.03.2015 10:47
  • Last Updated - 20.03.2015 15:02
Scheduled Xen Security Update and Reboot (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • The VPS hypervisor Avalon will be updated to the newest Xen version, which includes several important security updates. This requires a server reboot, which will result in an outage of approximately 5 to 10 minutes.

    We're sorry for any inconvenience this may cause.

    Update 17.03.15 09:59 CET: The update has been successfully installed and all VMs are back online.

    Thank you for your patience and understanding.

  • Date - 17.03.2015 09:55 - 17.03.2015 09:59
  • Last Updated - 17.03.2015 09:59
Scheduled Xen Security Update and Reboot (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • The VPS hypervisor Sierra will be updated to the newest Xen version, which includes several important security updates. This requires a server reboot, which will result in an outage of approximately 10 minutes.

    We're sorry for any inconvenience this may cause.

    Update 16.03.15 21:07 CET: The update has been successfully installed and all VMs are back online.

    Thank you for your patience and understanding.

  • Date - 16.03.2015 21:00 - 16.03.2015 21:07
  • Last Updated - 16.03.2015 21:15
Partial Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting System - myLoc Data Center
  • Due to a network outage at myLoc data center, some servers are currently unavailable.

    The data center will post updates on their network status page at http://www.myloc-status.de as soon as they have more information.

    We're sorry for the inconvenience caused by this outage.

    Update 11.03.15 19:25 CET: All servers are currently back online. We're awaiting confirmation whether the network issue has been permanently fixed.

    Update 11.03.15 19:34 CET: The network outage is ongoing again, unfortunately. Updates will follow once we receive more information.

    Update 11.03.15 20:15 CET: All servers are back online, but the network still seems a bit unstable. The on-site data center technicians are doing their best to fix the network issue permanently.

    Update 11.03.15 20:27 CET: The root of the issue was found and fixed permanently. The data center technicians are working on analysing the cause.

    Update 11.03.15 22:35 CET: The data center has once again confirmed a permanent solution. All network segments are online and stable again.

    We will consider this issue permanently fixed for now. Our sincere apologies again for the inconvenience.

  • Date - 11.03.2015 19:03 - 11.03.2015 20:32
  • Last Updated - 11.03.2015 22:36
Scheduled Maintenance of Access Network (Resolved)
  • Priority - Medium
  • Affecting Other - myLoc Data Center - Access Network
  • As part of an additional upgrade of the Access Network within the area where our VPS hypervisors are located, the myLoc data center will perform planned maintenance work on 04 March 2015 between 01:00 AM and 04:00 AM CET.

    In the context of urgently needed maintenance, software updates and configuration changes of key network components will be performed, which can lead to multiple temporary disruptions or limitations of the network connections. The on-site data center technicians will strive to keep the downtime to a minimum.

    If necessary, please forward this information to your clients/customers. We apologize for any inconvenience caused by the maintenance and the short notice, and thank you very much for your understanding. Should you have any questions or concerns, please don't hesitate to contact us.

    Update 04.03.15 01:31 CET: The maintenance has been initiated.

    Update 04.03.15 04:21 CET: The data center technicians have confirmed the completion of the maintenance.

    Update 04.03.15 05:24 CET: There are some network issues following the maintenance that affect the VPS hypervisor "Sierra". The data center technicians are working on resolving this issue at highest priority.

    Update 04.03.15 08:01 CET: The network issue should now be permanently resolved.

  • Date - 04.03.2015 01:00 - 04.03.2015 08:01
  • Last Updated - 04.03.2015 09:12
Scheduled Kernel Update and Reboot (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • The VPS hypervisor Sierra will be updated to the newest kernel version. This requires a server reboot, which will result in an outage of approximately 10 minutes.

    We're sorry for any inconvenience this may cause.

    Update 24.02.15 01:52 CET: The hypervisor and all VMs are back online. The maintenance is complete.

    Thank you for your patience and understanding.

  • Date - 24.02.2015 01:45 - 24.02.2015 01:53
  • Last Updated - 24.02.2015 01:53
PHP Security Update (Resolved)
  • Priority - Medium
  • Affecting System - cPanel Servers
  • cPanel, Inc. has released EasyApache 3.28.4 with PHP versions 5.4.38 and 5.5.22. This release addresses vulnerabilities related to CVE-2015-0235 and CVE-2015-0273 by fixing bugs in the Core module.

    As part of our regular maintenance, we will update all servers running cPanel to the latest PHP version. On servers running CloudLinux, this can lead to temporary 500 internal server errors until the updated files replicate across all CageFS containers of each hosting account. This can take anywhere between 1 minute and 15 minutes.
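
    If you'd like to check when your own website has recovered, a small polling script along the lines of the sketch below can help. This is only an illustration: the URL is a placeholder and the one-minute interval is arbitrary.

        import time
        import urllib.request
        from urllib.error import HTTPError, URLError

        URL = "http://yourdomain.com/"  # placeholder; use your own website

        # Poll the site until it stops returning errors, i.e. until the updated
        # PHP files have replicated into the account's CageFS container.
        while True:
            try:
                with urllib.request.urlopen(URL, timeout=10) as resp:
                    print("Site is responding with HTTP", resp.status)
                    break
            except HTTPError as err:
                print("Still returning HTTP", err.code, "- waiting...")
            except URLError as err:
                print("Connection problem:", err.reason, "- waiting...")
            time.sleep(60)  # check once a minute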

    Update 24.02.15 01:40 CET: All servers have been successfully updated. If you notice any issues, please contact our technical support department.

  • Date - 24.02.2015 01:00 - 24.02.2015 01:40
  • Last Updated - 24.02.2015 01:40
Kernel Update and Reboot (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • The VPS hypervisor Avalon will be updated to the newest kernel version. This requires a server reboot, which will result in an outage of approximately 10 minutes.

    We're sorry for any inconvenience this may cause.

    Update 23.02.15 15:14 CET: The hypervisor and all VMs are back online. Thank you for your patience.

  • Date - 23.02.2015 15:10 - 23.02.2015 15:15
  • Last Updated - 23.02.2015 15:15
Scheduled Maintenance of Access Network (Resolved)
  • Priority - High
  • Affecting Other - myLoc Data Center - Access Network
  • As part of an upgrade of the Access Network within the area where our VPS hypervisors are located, the myLoc data center will perform planned maintenance work on 22 February 2015 between 02:00 AM and 04:00 AM CET.

    In the context of urgently needed maintenance, software updates and configuration changes of key network components will be performed, which can lead to multiple temporary disruptions or limitations of the network connections. The on-site data center technicians will strive to keep the downtime to a minimum.

    If necessary, please forward this information to your clients/customers. We apologize for any inconvenience caused by the maintenance and the short notice, and thank you very much for your understanding. Should you have any questions or concerns, please don't hesitate to contact us.

    Update 22.02.15 01:42 CET: The preparations for the upcoming maintenance work have begun.

    Update 22.02.15 02:32 CET: The maintenance is in progress. The connections will be disrupted.

    Update 22.02.15 02:50 CET: Most connections have been restored. So far we've observed an outage of only approximately 15 minutes (even shorter for some servers). Maintenance is still ongoing.

    Update 22.02.15 02:58 CET: The first part of the maintenance has been successfully completed. The next parts will cause only a few short outages in particular network segments.

    Update 22.02.15 04:29 CET: The entire maintenance has been successfully completed. All servers and IPs should be online and operating properly.

  • Date - 22.02.2015 02:00 - 22.02.2015 04:29
  • Last Updated - 22.02.2015 08:12
Web Server Security Update (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • We're going to install a security update for the web server. The updating process might cause temporary 500 Internal Server Errors for some accounts/websites that have a specific set of htaccess rules. The error will disappear once all CageFS containers (one per account) are updated with the latest files. This process can take between 2 and 15 minutes.

    Thank you for your patience and understanding.

    Update 03.02.15 04:55 CET: The update has to be repeated, unfortunately. It should be done in approximately 10 minutes.

    Update 03.02.15 05:05 CET: The update has been successfully installed and all CageFS containers have been updated.

  • Date - 03.02.2015 04:45 - 03.02.2015 05:05
  • Last Updated - 03.02.2015 05:08
Server Reboot (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • The server Neptune is going down for reboot shortly due to a network issue. The reboot process usually takes 5 minutes.

    We're sorry for any inconvenience this may cause.

    Update 03.02.15 04:11 CET: The server and all services are back online. Thank you for your understanding.

  • Date - 03.02.2015 04:07 - 03.02.2015 04:11
  • Last Updated - 03.02.2015 04:21
Reboot Following Patch of GHOST Vulnerability (Resolved)
  • Priority - Critical
  • Affecting System - All Servers
  • All servers are going to be rebooted within the next 12 hours, following the installation of a critical security patch. This will cause an outage of approximately 5 to 15 minutes for each server until all services are back online.

    The GHOST vulnerability (CVE-2015-0235) is a serious vulnerability that was found today in the glibc Linux library. This library is widely used by many Linux servers and by software commonly used in the web hosting industry. The vulnerability could allow a remote attacker to take complete control of the system, so it must be patched immediately and the patch cannot be postponed.

    All our servers use the CloudLinux and CentOS operating systems. As soon as an official patch is released for these operating systems, we will apply it to all related servers and reboot them.

    Update 27.01.15 23:48 CET: CloudLinux has just released a patch for this vulnerability. We will now proceed with installing it on all CloudLinux servers and rebooting them.

    Update 28.01.15 00:15 CET: All CloudLinux servers have been patched and rebooted. Awaiting a patch for CentOS to get released.

    Update 28.01.15 02:55 CET: All CentOS hosting servers have been patched and rebooted.

    NOTE: Self-managed VPSs have not been patched since we don't have access to them. If you have a self-managed VPS, please update the glibc library and reboot your VPS immediately.
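
    If you are unsure which glibc version your self-managed VPS is currently running, a quick check along the lines of the sketch below can help. This is only an illustration and assumes a Linux VPS with Python 3 installed; note that distributions backport the GHOST fix to older version numbers, so your distribution's package changelog remains the authoritative source.

        import ctypes

        # Ask the running glibc for its version string via gnu_get_libc_version().
        libc = ctypes.CDLL("libc.so.6")
        libc.gnu_get_libc_version.restype = ctypes.c_char_p
        print("Running glibc version:", libc.gnu_get_libc_version().decode())

        # Remember: after updating the glibc package, a reboot (or at least a restart
        # of all long-running services) is required so that every process picks up
        # the patched library.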

  • Date - 27.01.2015 23:40 - 28.01.2015 02:58
  • Last Updated - 28.01.2015 02:57
PHP Security Update (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • We are going to update the native PHP installation on this server. The update fixes various security bugs, thus we cannot postpone the update.

    During the update, some accounts/websites using the native PHP version might temporarily return a 500 Internal Server Error. This issue will resolve itself once the updated PHP files replicate across the CageFS containers of each account, which usually takes 2 to 15 minutes.

    Thank you for your understanding.

    Update 27.01.15 14:47 CET: The update process has been initiated.

    Update 27.01.15 15:05 CET: The PHP update has been completed. If you notice any issues that could be related to the PHP update, please don't hesitate to let us know.

  • Date - 27.01.2015 14:45 - 27.01.2015 15:05
  • Last Updated - 27.01.2015 15:05
Scheduled Reboot for VPS Hypervisor Avalon (Resolved)
  • Priority - High
  • Affecting Server - Avalon
  • As part of our regular maintenance, we are going to install the latest kernel and security patches on the VPS hypervisor Avalon.

    This process requires a server reboot, which will result in an outage of approximately 10 to 15 minutes. All VMs hosted on this machine will be unavailable during this time.

    Update 08.01.15 19:00 CET: The VPS hypervisor is going down for reboot now.

    Update 08.01.15 19:08 CET: The VPS hypervisor and all VMs are now back online. The maintenance is complete.

    Thank you for your understanding in regards to this matter.

  • Date - 08.01.2015 19:00 - 08.01.2015 19:08
  • Last Updated - 08.01.2015 19:08
Scheduled Reboot for VPS Hypervisor Sierra (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • As part of our regular maintenance, we are going to install the latest kernel and security patches on the VPS hypervisor Sierra.

    This process requires a server reboot, which will result in an outage of approximately 10 to 15 minutes. All VMs hosted on this machine will be unavailable during this time.

    Update 08.01.15 18:38 CET: The VPS hypervisor is going down for reboot now.

    Update 08.01.15 18:45 CET: The VPS hypervisor and all VMs are now back online. The maintenance is complete.

    Thank you for your understanding in regards to this matter.

  • Date - 08.01.2015 18:30 - 08.01.2015 18:45
  • Last Updated - 08.01.2015 18:50
PHP Security Update (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • We are going to update the native PHP installation on this server. The update fixes two security-related bugs, thus we cannot postpone it.

    During the update, some accounts/websites using the native PHP version might return a 500 Internal Server Error. This issue will resolve itself once the updated PHP files replicate across the CageFS containers of each account, which usually takes 10 to 15 minutes.

    If you cannot wait and need to prevent the 500 error immediately, please switch to an alternative PHP version under cPanel » Select PHP Version.

    Thank you for your understanding.

    Update 23.12.14 13:02 CET: The update process has been initiated.

    Update 23.12.14 13:21 CET: Native PHP has been successfully updated. If you notice any issues that could be related to the PHP update, please don't hesitate to let us know.

  • Date - 23.12.2014 13:00 - 23.12.2014 13:22
  • Last Updated - 23.12.2014 13:22
Sierra Hypervisor and Related VMs Unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • The VPS hypervisor Sierra is currently unreachable. We are investigating this issue and will post updates as soon as possible.

    Update 13.12.14 02:44 CET: A hard reboot of the server has been attempted, but it's still unreachable. Since netKVM mode doesn't work either and other servers within the same area are also unreachable, we presume this might be a network issue at the myLoc data center. An emergency support ticket has already been submitted.

    Update 13.12.14 03:42 CET: We have the confirmation that a data center technician is currently investigating this issue.

    Update 13.12.14 04:39 CET: The outage is caused by an issue with the network switch that connects our server and other nearby servers. The DC network engineers are already working on fixing the network switch and restoring service.

    Update 13.12.14 06:33 CET: We've just been informed that the DC technicians are still working at highest priority to fix the faulty network switch.

    Update 13.12.14 07:40 CET: The VPS hypervisor finally came back online a few minutes ago.

    We're really sorry for the inconvenience caused by this outage. Thank you for your patience and understanding.

  • Date - 13.12.2014 02:26 - 13.12.2014 07:42
  • Last Updated - 13.12.2014 07:46
Scheduled Updates and Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we are going to install some important updates on the server Neptune. Among these updates, the Kernel will be updated, which requires a server reboot.

    The server reboot is estimated to take 5 to 10 minutes. During this time, all websites and services will be temporarily unavailable.

    We will keep our network status page updated with the latest progress. Thank you for your patience.

    Update 20.11.14 20:17 CET: LiteSpeed Web Server has been successfully updated. Proceeding with the Kernel update now.

    Update 20.11.14 20:45 CET: Kernel has been updated. The server is going down for reboot now and should be back online within 5 to 10 minutes.

    Update 20.11.14 20:50 CET: The server and all services are now back online. Proceeding with the PHP update, which is the final update.

    Update 20.11.14 21:02 CET: Some websites using the native PHP version return 500 Internal Server Error. This error should disappear once the updated PHP binaries are replicated across all accounts and CageFS containers. It should take a few minutes, please stand by.

    Update 20.11.14 21:15 CET: All updates have been successfully installed. We've randomly tested several websites on the server and everything seems to work properly so far.

    The maintenance is hereby complete. Thank you for your patience in regards to this matter. If you notice any issues, please contact us and we'll gladly help you.

  • Date - 20.11.2014 20:15 - 20.11.2014 21:18
  • Last Updated - 20.11.2014 21:18
London Cluster Unreachable (Resolved)
  • Priority - Low
  • Affecting System - London Cluster (DNS and SMX)
  • There is currently an unknown issue with the London Cluster (cluster1). We've already contacted the data center and await a response.

    This issue isn't critical, as it only affects 1 of 3 servers that serve the DNS and Secondary MX. These services are redundant and continue to work through the New York Cluster (cluster2) and local servers. This outage shouldn't have a noticeable impact.

    We will post updates once we have more information. Thank you.

    Update 11.11.14 17:15 CET: The London Cluster is back online. All related services are functioning properly.

  • Date - 11.11.2014 15:07 - 11.11.2014 17:41
  • Last Updated - 11.11.2014 17:41
Update to cPanel 11.46 - Server Neptune (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • The server "Neptune" will be updated from cPanel 11.44 to 11.46. Updates are usually seamless and we don't expect any issues to occur. However, some services may become temporary unavailable for a few minutes while cPanel updates them.

    The estimated duration of the entire update is 30 minutes. We'll keep you updated if we notice any issues.

    Update 08.11.14 18:40 CET: The cPanel update has been successfully completed. All services are online and work properly.

  • Date - 08.11.2014 18:15 - 08.11.2014 18:43
  • Last Updated - 08.11.2014 18:43
Planned Maintenance for Hypervisor Sierra (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • As part of our regular maintenance, the hypervisor "Sierra" will be rebooted on 08 November 2014 at 16:00 CET, along with all related VMs. The server reboot is required following a Kernel and minor OS upgrade, which also includes important security patches.

    The reboot is estimated to take between 5 and 15 minutes. During this time, your VPS will be briefly unavailable until the hypervisor starts up. There is no action required from your side, other than to take note of this scheduled maintenance.

    We're sorry for any inconvenience this maintenance may cause. Thank you for your understanding.

    Update 08.11.2014 15:58 CET: The maintenance has been initiated. Updates are currently being installed.

    Update 08.11.2014 16:25 CET: Updates have been successfully installed. It took longer than expected because there were a lot of updates. The hypervisor and all VMs are now going down for reboot.

    Update 08.11.2014 16:30 CET: The hypervisor has rebooted. We're now starting up all VMs.

    Update 08.11.2014 16:35 CET: All VMs are now back online. The maintenance is complete.

    We appreciate your patience and understanding. Thank you!

  • Date - 08.11.2014 16:00 - 08.11.2014 16:32
  • Last Updated - 08.11.2014 16:42
Temporary 500 Error Due To Running Update (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • There is currently a PHP security update in progress that seems to be causing a 500 Internal Server Error on a limited number of websites that use the native PHP version. The update is currently still running.

    This error should disappear within approximately 5 to 10 minutes, once the update completes and the updated files get replicated across all CageFS containers.

    We're sorry for any inconvenience this issue may cause. The update should be done shortly.

    Update 21.10.14 00:30 CEST: The update has finished, but unfortunately, the error still persists. We are currently investigating this issue and will try to fix it as soon as possible.

    Update 21.10.14 00:40 CEST: We were able to fix this issue by repeating the update process. All websites using the native PHP version should now function properly.

    If you notice any further issues, please don't hesitate to contact our technical support department. Thank you for your patience and understanding.

  • Date - 21.10.2014 00:19 - 21.10.2014 00:47
  • Last Updated - 21.10.2014 00:47
Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • There is currently a network outage at the myLoc data center in Düsseldorf, affecting the area where the Neptune server is located. The on-site data center technicians are already investigating this issue and trying to bring the network back online.

    We will post updates on our Network Status page as soon as we receive more information from the data center.

    Meanwhile, you can also follow the status page of the myLoc data center at http://www.myloc-status.de

    Our sincere apologies for any inconvenience this outage may cause. Unfortunately, we have absolutely no influence on this outage and can only wait for the on-site technicians to fix this issue in a timely manner.

    Update 17.10.2014 15:54 CEST: The server was briefly back online for about a minute, but is now offline again. We hope that the DC techs will be able to find a permanent solution.

    Update 17.10.2014 16:24 CEST: Unfortunately, the network outage is still ongoing. The data center has confirmed that a statement with the root cause analysis will follow once the problem is resolved.

    Should we have any new information, we will update this page immediately. Thank you for your patience.

    Update 17.10.2014 16:35 CEST: The network is currently back online. We're awaiting confirmation that this issue has been resolved permanently.

    Update 17.10.2014 17:38 CEST: The data center has confirmed the permanent fix for the network issue. The root cause was a faulty configuration that was found and fixed by the on-site technicians. They will, of course, prevent this in the future through technical measures and process modifications.

    Once again, we're sorry for the inconvenience caused by this outage. Thank you again for your patience and understanding.

  • Date - 17.10.2014 15:20 - 17.10.2014 16:34
  • Last Updated - 17.10.2014 18:27
SSLv3 Disabled Due To Poodle Vulnerability (Resolved)
  • Priority - High
  • Affecting System - SSLv3 Cryptographic Protocol
  • On 14 Oct 2014, several news websites published speculative reports about a possible OpenSSL bug, affecting the SSLv3 cryptographic protocol.

    Due to the lack of information regarding this flaw, we have already taken precautions by implementing more secure SSL cipher suites and disabling the SSLv3 protocol on the Web Server, FTP and Email services. The TLS 1.0, 1.1 and 1.2 protocols will remain enabled to handle SSL connections further on.

    The only drawback of this security measure is that operating systems, browsers, email clients and FTP clients that don't support any TLS protocol will fail to establish a secure connection to our servers (e.g. via HTTPS). The most notable browser without TLS support is Internet Explorer on Windows XP and older systems, which are very outdated and reached their end of life many years ago. Recent and modern browsers/clients support TLS and should be able to establish a secure connection without issues.
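
    If you'd like to verify from your own machine that secure connections to your website still work over TLS, a quick check along the lines of the sketch below is possible. This is only an illustration, assuming a reasonably recent Python 3; the hostname is a placeholder, and the SSLv3 part only runs if your local OpenSSL build still ships SSLv3 support.

        import socket
        import ssl

        HOST = "yourdomain.com"  # placeholder; use your own domain

        # 1) A normal TLS connection should still succeed.
        ctx = ssl.create_default_context()
        with socket.create_connection((HOST, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print("Negotiated protocol:", tls.version())

        # 2) An SSLv3-only handshake should now be rejected by the server.
        if hasattr(ssl, "PROTOCOL_SSLv3"):
            v3 = ssl.SSLContext(ssl.PROTOCOL_SSLv3)
            try:
                with socket.create_connection((HOST, 443), timeout=10) as sock:
                    with v3.wrap_socket(sock):
                        print("Unexpected: the server still accepted SSLv3")
            except OSError:  # includes ssl.SSLError and reset connections
                print("SSLv3 handshake rejected, as expected")
        else:
            print("This Python/OpenSSL build has no SSLv3 support to test with")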

    We will post updates once more information regarding this vulnerability surfaces. At this point, there's no precise information available. This could very well be only a client-side vulnerability, which wouldn't affect servers at all. Nevertheless, security has the highest priority for us, which is why we are taking all possible precautions before the vulnerability gets publicly disclosed and exploited.

    Update 15.10.2014 00:50 CEST: The vulnerability has just been published by Google and has the codename "Poodle": http://googleonlinesecurity.blogspot.co.uk/2014/10/this-poodle-bites-exploiting-ssl-30.html

    According to Google, disabling the SSLv3 protocol mitigates this vulnerability, which we have already done before the vulnerability was disclosed. Our partners at CloudFlare have also disabled the SSLv3 protocol by default, but they leave an option for clients who prioritize broad browser support over the risk of this vulnerability to re-enable the SSLv3 protocol via their control panel.

    Considering that only 1.12% of the Windows XP users monitored through the CloudFlare network were using the old SSLv3 protocol, there should be no substantial reason to re-enable it. Google Chrome and most probably all other major browsers will drop support for SSLv3 in favor of TLS 1.2 in the near future.

    If for some reason you still require SSLv3 support for your website, the only solution at this moment is to activate CloudFlare under cPanel. You can then enable SSLv3 support under the CloudFlare Security Settings. Please read the full blog post at CloudFlare for more information: https://blog.cloudflare.com/sslv3-support-disabled-by-default-due-to-vulnerability/

    Update 15.10.2014 13:00 CEST: We have sent an email to all clients and posted detailed information, along with some frequently asked questions, on our helpdesk at https://support.maxterhost.com/index.php?/News/NewsItem/View/27/sslv3-disabled-due-to-poodle-vulnerability

    Should you have any questions or concerns in regards to this vulnerability, please don't hesitate to contact us.

    Update 16.10.2014 00:41 CEST: LiteSpeed Web Server has been updated and includes a patched version of OpenSSL, which covers the "Poodle" vulnerability and other vulnerabilities. We have also updated our SSL cipher suites for better compatibility.

  • Date - 14.10.2014 00:00 - 15.10.2014 00:00
  • Last Updated - 17.10.2014 15:58
Scheduled Maintenance and Reboot - XSA-108 (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • A high-priority maintenance and server reboot are scheduled on 01 October 2014 between 18:00 and 18:15 CEST.

    A third-party vulnerability within the Xen hypervisor was reported to us that requires an immediate patch. This is not a MaxterHost issue - it is an issue that affects many Xen environments. More information regarding the Xen security advisory XSA-108 is available at http://xenbits.xen.org/xsa/advisory-108.html

    The patch requires us to update the Kernel on the VPS node (dom0) and reboot it along with all hosted virtual machines (VMs). We anticipate there will be approximately 10 to 15 minutes of downtime per VM.

    We apologize for any inconvenience this may cause and invite you to contact us if you have any questions or concerns.

    Update 01.10.2014 18:00 CEST: The kernel and patches are installing.

    Update 01.10.2014 18:10 CEST: The VPS node and all VMs are going down for reboot.

    Update 01.10.2014 18:13 CEST: Reboot is in progress.

    Update 01.10.2014 18:14 CEST: The VPS node is back online. We're now starting up all VMs.

    Update 01.10.2014 18:18 CEST: All VMs have been booted and should be back online by now.

    Thank you for your understanding in regards to this matter.

  • Date - 01.10.2014 18:00 - 01.10.2014 18:18
  • Last Updated - 01.10.2014 18:20
Sierra VPS Node Unreachable - Network Outage (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • We are aware of an outage that affects the Sierra VPS node and are currently investigating it. Updates will follow as soon as possible.

    Update 13.09.14 03:53 CEST: It seems to be a network issue that affects all servers at myLoc DUS2, within the area where our VPS node is located. An emergency ticket has already been sent to the data center and we're awaiting their reply.

    Update 13.09.14 04:19 CEST: The network and all servers are now back online.

    Update 13.09.14 07:04 CEST: Another network outage is causing the VPS node to be unreachable. A new emergency ticket has been sent to the data center and we're waiting for them to investigate.

    Update 13.09.14 07:13 CEST: This issue has now been fixed, the network and all servers are back online.

    We are honestly sorry for the inconvenience caused by this outage, as well as all recent outages. The data center has been asked to find a long-term solution to prevent issues like these, as there have been too many outages within the past 30 days, usually affecting the same area.

    If you notice any further issues or would like to submit a complaint, please don't hesitate to contact us.

  • Date - 13.09.2014 03:45 - 13.09.2014 07:13
  • Last Updated - 13.09.2014 07:24
Scheduled Maintenance of Access Network (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • The data center will perform maintenance work on their Access Network on 05 September 2014, between 00:00 and 06:00 CEST, within the segment where the VPS node "Sierra" is located.

    The reason for the maintenance is a security-relevant firmware update. This will lead to a short network outage of approximately 3 minutes.

    Thank you for your understanding in regards to this matter.

    Update 05.09.14 00:58 CEST: The data center technicians have initiated the update process.

    Update 05.09.14 08:17 CEST: Although the maintenance was announced as completed, the VPS node has been unreachable for about 4 minutes. We have contacted the data center and will update this ticket as soon as we have more information.

    Update 05.09.14 08:34 CEST: The VPS node is back online. The outage was caused by a technical issue that affected part of the data center's network infrastructure (within the area DUS2). We're truly sorry for the inconvenience this maintenance has caused. The maintenance should now be completely done.

  • Date - 05.09.2014 00:00 - 05.09.2014 08:34
  • Last Updated - 05.09.2014 08:58
Sierra VPS Node Unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • The VPS node Sierra is currently unreachable. We are investigating this issue and will update you as soon as possible.

    We're sorry for any inconvenience caused by this outage.

    Update 16.08.14 17:33 CEST: We have hard-rebooted the server and contacted the data center.

    Update 16.08.14 17:37 CEST: Apparently this issue is caused by a network outage, as we are unable to ping some nearby servers. We're waiting for the data center to update us with more information and fix this issue.

    Update 16.08.14 18:57 CEST: The CEO of the data center has confirmed that their technicians are already investigating this issue and trying to fix it as soon as possible.

    Update 16.08.14 20:21 CEST: The technicians are still working on this. Multiple server areas within the data center are affected.

    Update 16.08.14 20:27 CEST: Our server should be back online within 60 minutes. We kindly ask for more patience.

    Update 16.08.14 20:42 CEST: The VPS node and all VMs finally came back online a minute ago. According to the data center, the outage was caused by a failure of the Power-Reset-System of several servers. The defective components had to be replaced individually for each server, which is why the outage lasted so long.

    Once again, we're really sorry for the trouble caused and would like to thank you for your patience and understanding in this matter.

    If you notice any further issues, please don't hesitate to let us know.

  • Date - 16.08.2014 17:26 - 16.08.2014 20:41
  • Last Updated - 16.08.2014 21:02
Scheduled Server Transfer (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our ongoing efforts to improve our hosting service, maintain reliability and increase performance, all accounts and data from the server "Neptune" will be moved to a brand new server. The new server is equipped with a new-generation processor, double the amount of RAM and, most importantly, Solid State Drives (SSDs). These can boost your website's speed and performance by up to 300%. It should also allow us to increase the resource limits of all accounts by up to 100%.

    The server transfer is scheduled to start on 30 July 2014 at 20:00 CEST and is estimated to complete by 31 July 2014 at 08:00 CEST.

    The transfer should not require any action from your side, as we will try to make it as seamless as possible. Once we transfer all data to the new server, the data center technicians will route the existing IP addresses to our new server.

    There will be no change of nameservers or IP addresses necessary since the new server will be located at the same data center (myLoc in Düsseldorf).

    Although there shouldn't be any downtime, we kindly ask you not to make any changes to your hosting account (incl. websites, emails, settings, databases, etc.) while the transfer is in progress. Once your account is transferred to the new server, any changes made on the old server will not be replicated.

    IMPORTANT NOTE: To make sure that you access the correct server in case your account was already transferred, please use the following URL formats to access cPanel or WHM:

    cPanel: http://yourdomain.com/cpanel
    WHM: http://yourdomain.com/whm
    Replace yourdomain.com with your own domain.


    Update 30.07.2014 19:55 CEST: We are now initiating the server transfer and will move all accounts/data to the new server. Please try not to make any changes until we are done with the transfer, unless absolutely necessary. Regular status updates will be posted every 2 hours. Thank you for your patience and understanding.

    Update 30.07.2014 21:42 CEST: Transfer progress: 10%

    Update 30.07.2014 23:47 CEST: Transfer progress: 15%. All accounts with dedicated IPs have been transferred and already point to the new server.

    Update 31.07.2014 00:31 CEST: Transfer progress: 20%.

    Update 31.07.2014 01:05 CEST: Transfer progress: 25%.

    Update 31.07.2014 02:00 CEST: Transfer progress: 30%.

    Update 31.07.2014 03:20 CEST: Transfer progress: 37%.

    Update 31.07.2014 05:18 CEST: Transfer progress: 43%.

    Update 31.07.2014 06:00 CEST: Transfer progress: 44%.

    Unfortunately, the transfer isn't going as fast as expected due to factors out of our control (server load, network, running tasks, etc.). It will most probably take another 8 hours from now. So far 180 GB of 395 GB have been transferred. Only accounts that point to the IPs 5.104.105.135, 5.104.105.178 and 5.104.105.184 are left. All other accounts pointing to other IPs have been successfully transferred to the new server and already run on it.

    If your account points to the IPs mentioned above, please try not to make any changes to them, as these accounts have already been archived by the server and are pending transfer. We can re-transfer accounts if necessary, but this is not recommended.
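
    If you're not sure which IP address your domain currently points to, a quick lookup along the lines of the sketch below will show it. This is only an illustration; the domain name is a placeholder and the script requires Python 3.

        import socket

        DOMAIN = "yourdomain.com"  # placeholder; use your own domain
        PENDING_IPS = {"5.104.105.135", "5.104.105.178", "5.104.105.184"}

        # Resolve the domain and collect every IP address it currently points to.
        ips = {info[4][0] for info in socket.getaddrinfo(DOMAIN, 80)}
        print(DOMAIN, "points to:", ", ".join(sorted(ips)))

        if ips & PENDING_IPS:
            print("This account is on an IP that is still pending transfer;")
            print("please avoid making changes until the transfer is finished.")
        else:
            print("This account is not on one of the pending IPs.")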

    We'll keep you updated. Thank you for your patience.

    Update 31.07.2014 07:27 CEST: Transfer progress: 52%.

    Update 31.07.2014 08:44 CEST: Transfer progress: 64%.

    Update 31.07.2014 09:35 CEST: Transfer progress: 70%.

    Update 31.07.2014 10:19 CEST: Transfer progress: 73%. In about 60 minutes we will point the shared IP 5.104.105.135 to the new server, as almost all accounts assigned to this IP have been transferred.

    Update 31.07.2014 10:46 CEST: Transfer progress: 78%.

    Update 31.07.2014 11:15 CEST: Transfer progress: 83%. All accounts that point to the IP 5.104.105.135 have been successfully transferred. They are now directed to the new server.

    Update 31.07.2014 11:36 CEST: Transfer progress: 85%.

    Update 31.07.2014 12:03 CEST: Transfer progress: 90%.

    Update 31.07.2014 12:27 CEST: Transfer progress: 94%.

    Update 31.07.2014 13:07 CEST: Transfer progress: 98%.

    The last 2% might take about 3 hours to complete, as we're transferring the final, largest account on the server. All other accounts have been transferred and point to the new server.

    Update 31.07.2014 20:09 CEST: All accounts were successfully transferred a few hours ago and we believe that everything should be working correctly by now.

    We'd like to thank everyone for their patience and understanding. We hope this upgrade was worth it and will benefit your website and online business.

    As always, if you notice any issues, please don't hesitate to contact us. Thank you!

  • Date - 30.07.2014 20:00 - 31.07.2014 20:07
  • Last Updated - 31.07.2014 20:10
HTTP Service Unavailable (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • We are currently aware of an issue that affects all websites (HTTP service). We are already working on resolving this issue as soon as possible.

    Our sincere apologies for the inconvenience caused.

    Update 31.07.14 14:38 CEST: All websites are back online. The outage was caused by some post-transfer work, as we still need to iron out a few issues.

    All work should be done by tomorrow at the latest. Our services will return to normal afterwards. Thank you for your patience and understanding in this matter.

  • Date - 31.07.2014 14:34 - 31.07.2014 14:39
  • Last Updated - 31.07.2014 14:39
Scheduled Maintenance and Server Reboot (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • As the final preparation for the upcoming server transfer, we are going to install the latest kernel and software updates on the current Neptune server.

    This process requires a server reboot. It should take about 5 to 10 minutes until the server and all services are back online.

    Thank you for your understanding.

    Update 29.07.14 22:59 CEST: Updates installed. The server is going down for reboot now.

    Update 29.07.14 23:05 CEST: Server rebooted. The services are starting up.

    Update 29.07.14 23:10 CEST: All websites and services are back online. This task is complete.

  • Date - 29.07.2014 23:00 - 29.07.2014 23:10
  • Last Updated - 29.07.2014 23:10
Data Packet Loss due to Reflection Attack (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • Due to a network issue, the area where the Neptune server is located is experiencing packet loss of between 5% and 15%. This may cause some websites to load a bit slower than usual or, in some very rare cases, time out.
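
    If you'd like to measure the packet loss towards your website from your own connection, a simple wrapper around the standard ping utility can be used, as in the sketch below. This is only an illustration; it assumes a Unix-like ping and a recent Python 3, and the hostname is a placeholder.

        import re
        import subprocess

        HOST = "yourdomain.com"  # placeholder; use your own domain or server hostname

        # Send 20 ICMP echo requests and extract the "% packet loss" figure
        # from ping's summary line (Unix-style output assumed).
        output = subprocess.run(
            ["ping", "-c", "20", HOST],
            capture_output=True, text=True
        ).stdout

        match = re.search(r"([\d.]+)% packet loss", output)
        if match:
            print("Measured packet loss:", match.group(1) + "%")
        else:
            print("Could not parse ping output:")
            print(output)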

    The data center technicians are already investigating this issue and will try to fix it in a timely manner. We kindly ask for your patience.

    Update 22.07.2014 13:15 CEST: The packet loss is caused by high amounts of outgoing data packets. We are still investigating this issue.

    Update 22.07.2014 14:23 CEST: It turns out that this was a Reflection Attack. The data center was able to mitigate the attack and the problem is now resolved.

    Thank you for your understanding in regards to this matter. If you notice further issues, please let us know.

  • Date - 22.07.2014 11:15 - 22.07.2014 14:20
  • Last Updated - 22.07.2014 14:40
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - Medium
  • Affecting Server - Sierra
  • As part of our regular maintenance, we are going to install the latest kernel and security patches on the VPS node Sierra.

    This process requires a server reboot, which will result in an outage of approximately 10 to 15 minutes.

    Update 20.06.14 19:15 CEST: The server and all VMs are going down for reboot now.

    Update 20.06.14 19:24 CEST: The hypervisor has booted up with the new kernel. The VMs will be back online in about 2 to 5 minutes.

    Update 20.06.14 19:29 CEST: All VMs are back online. The maintenance is complete.

    Thank you for your understanding in regards to this matter.

  • Date - 20.06.2014 19:15 - 20.06.2014 19:29
  • Last Updated - 20.06.2014 19:43
Scheduled Server Reboot (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • The server Neptune is going down for reboot due to a hanging task that we cannot close or restart. The server should be back online within 5 to 10 minutes.

    Thank you for your patience.

    Update 13.06.14 23:34 CEST: The server was rebooted and all services are currently starting up.

    Update 13.06.14 23:39 CEST: All services and websites are back online. The server is fully operational.

  • Date - 13.06.2014 23:30 - 13.06.2014 23:41
  • Last Updated - 13.06.2014 23:44
Limited Service and Support due to Hurricane (Resolved)
  • Priority - Medium
  • Affecting Other - MaxterHost
  • Due to a hurricane in Gelsenkirchen (main office location), Düsseldorf (server location) and nearby cities, our response times could be longer than usual. Although no issues have been reported by the data center so far and all servers are running properly, outages can't be ruled out yet, since Düsseldorf (among other cities) was seriously affected by the hurricane and several cables were cut by falling trees. We have technicians located in the USA who will try to resolve any issues that may arise in the meantime.

    We will try to inform you as soon as possible if there are any issues that affect our service. Your patience and understanding in regards to this matter would be greatly appreciated.

    Update 10.06.14 18:00 CEST: There were no issues that affected our servers or the data center so far. Another storm or hurricane was announced for tonight, but hopefully it will not have the same intensity as last night. We'll try to keep you updated if anything happens at all.

    Update 11.06.14 19:00 CEST: There are no further weather alerts and everything should be almost back to normal. There were no outages or damage to our servers caused by this event.

  • Date - 10.06.2014 00:00 - 11.06.2014 19:00
  • Last Updated - 11.06.2014 19:00
Scheduled Server Reboot - Kernel Update (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • We will update to a new kernel to fix the local vulnerability CVE-2014-3153. This vulnerability has been rated to be high-risk, which is why we are upgrading immediately.

    Furthermore, we will replace Ksplice with KernelCare since kernel patches are released more promptly by KernelCare.

    This process requires a server reboot, which usually takes between 5 and 10 minutes.

    Update 08.06.14 07:30 CEST: The new kernel and all latest operating system updates have been installed. The server is going down for reboot now and should be back online in about 10 minutes.

    Update 08.06.14 07:34 CEST: The server has been successfully rebooted. All services are currently starting up.

    Update 08.06.14 07:40 CEST: All services are back online. Thank you for your patience.

  • Date - 08.06.2014 07:30 - 08.06.2014 07:42
  • Last Updated - 08.06.2014 07:42
PHP crashes on some accounts after update (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • We are currently aware of an issue that affects a limited number of websites following a PHP update.

    Our technicians are working to fix this issue as soon as possible. Thank you for your patience in this matter.

    Update 07.06.14 23:59 CEST: PHP works fine again for all accounts. This issue occurred because the php.ini file had not yet been updated across all CageFS containers. It is now resolved.

  • Date - 07.06.2014 23:52 - 07.06.2014 23:59
  • Last Updated - 08.06.2014 01:09
Outage at myLoc Data Center (DUS3) (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • It appears that there is a partial outage at the myLoc data center, affecting the server Neptune. We will try to reach the data center and will post updates as soon as we have more information.

    We're sorry for the inconvenience caused by this.

    Update 21.05.14 18:30 CEST: The data center's helpdesk is also offline and the phone lines are busy. Considering that this is also affecting their own services, we assume that the on-site technicians are already working on fixing the problem. Therefore, we kindly ask for your patience.

    Update 21.05.14 19:08 CEST: Our hardware firewall is back online. The server should also be back online soon.

    Update 21.05.14 19:15 CEST: The server is back online and fully operational. Thank you for your patience.

  • Date - 21.05.2014 18:23 - 21.05.2014 19:19
  • Last Updated - 25.05.2014 11:51
Emergency Network Maintenance (Resolved)
  • Priority - High
  • Affecting System - myLoc Data Center - Core Routers
  • The myLoc data center has recently detected isolated disturbances in the network routing. These cause traffic to be routed incorrectly and to reach the wrong target. According to initial analysis, the issue is caused by the active core router in Düsseldorf.

    The data center has opened a ticket with the manufacturer of their core router and will work together with their technicians to fix this issue.

    During this process, the data center may be required to switch to stand-by core routers while they fix the main core router. This would cause the internet connection to be briefly unavailable. If this step is required, they will try to complete it before 06:00 CEST.

    We're sorry for the short notice, but this issue was detected just a few hours ago at 01:00 CEST. Thank you for your understanding.

    Update 17.05.14 06:05 CEST: The data center will switch to their stand-by routers, which will cause an outage of their internet connections. We'll let you know once we have more updates.

    Update 17.05.14 06:12 CEST: All servers are back online after just 7 minutes, but the maintenance isn't over yet. Another short outage could follow while switching back to the main core routers once they're fixed.

    Update 17.05.14 06:24 CEST: The maintenance work is now completed.

  • Date - 17.05.2014 01:00 - 17.05.2014 06:24
  • Last Updated - 17.05.2014 09:45
VPS Node Sierra Unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • The VPS node Sierra is currently unreachable. We are already investigating this issue and will update you shortly.

    Update 01.05.14 07:18 CEST: We have tried to hard-reboot the server and start the netKVM console, but the server is still unreachable. An emergency ticket at the data center has been opened.

    Update 01.05.14 07:33 CEST: The server is now online again and the VMs are booting up. Apparently it was a network issue, but we're still awaiting a reply from the data center.

    Update 01.05.14 07:40 CEST: All VMs are back online. The outage was caused by a network fault at the data center in our server's area.

    We're sorry for any inconvenience this outage may have caused. Thank you for your understanding.

  • Date - 01.05.2014 07:05 - 01.05.2014 07:53
  • Last Updated - 01.05.2014 07:53
Scheduled Maintenance - Kernel Update (Resolved)
  • Priority - Medium
  • Affecting Server - Sierra
  • As part of our regular maintenance, we are going to update the kernel and all remaining OS packages on the VPS node Sierra. During this process we will need to shutdown all VMs and reboot the node, which will cause a planned outage of approximately 5 to 15 minutes.

    We're sorry for any inconvenience this may cause.

    Update 11.04.14 20:15 CEST: The node and all VMs are going down for reboot now. Please stand by.

    Update 11.04.14 20:29 CEST: This process is complete. All VMs are back online.

    Thank you for your patience.

  • Date - 11.04.2014 20:10 - 11.04.2014 20:30
  • Last Updated - 11.04.2014 20:30
Power Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc Data Center
  • There currently seems to be a network issue at the myLoc data center, affecting all our servers. Their entire network is offline and the phone lines at the data center are busy, therefore, we are unable to contact them to gather more information about this outage.

    We will post here more updates as soon as we find out more.

    Official status page of myLoc Data Center: https://myloc-status.de/en/

    Update 03.03.2014 15:25 CET: The data center has confirmed the outage. It is a problem with their entire power supply and they are already working on resolving the issue as soon as possible.

    Update 03.03.2014 15:30 CET: Some servers are back online. We hope that the rest of them will follow soon.

    Update 03.03.2014 15:56 CET: Only the virtual servers have been restored so far. We're still waiting for the shared/reseller server to come back online.

    Update 03.03.2014 16:17 CET: The power outage is ongoing. Although we certainly wish we could do more, at this point we can only wait for the data center to fix their power generators.

    Update 03.03.2014 16:46 CET: Our IPMI device is back online, but the server and hardware firewall aren't yet online. This could mean that although the server is powered on, the power supplies of some core routers are still off.

    Update 03.03.2014 18:10 CET: We have received an update that a technician is currently working in the area where the Neptune server is located.

    Update 03.03.2014 19:11 CET: There are no further updates so far and their phone lines are busy. We'll keep trying to reach them and will file a complaint.

    Update 03.03.2014 19:31 CET: We were finally able to reach a manager, who said that most of their network is back online and now they're working on each single server that is still offline. They should get to our server soon.

    Update 03.03.2014 20:14 CET: Despite the myLoc status page stating that their power is back on, several servers in their "Premium" area (including ours) are still awaiting a fix. We have had a ticket open with them since the beginning and await an update.

    Update 03.03.2014 21:00 CET: At this point we still have no concrete updates from the data center, but they've assured us that they're actively working to bring all systems back online. Since this issue doesn't depend on us at all, we are unable to provide an estimated time when our server will be online again, but we definitely hope that these issues will be fixed by tomorrow morning at the latest.

    Update 03.03.2014 23:10 CET: The hardware firewall and all servers are finally back online!

    Although we haven't received any response from the data center yet, we assume that this issue has been fixed permanently. The official statement from the data center will be sent to all clients later this week, once they're done with their investigation.

    Once again, we're extremely sorry for the inconvenience caused by the outage. We'll follow what measures the data center will undertake to prevent this kind of issue in the future and move to a different data center if necessary.

    Update 03.03.2014 23:20 CET: According to the data center staff, the power outage affected Düsseldorf-Rath and the entire district was left without power. Although the data center's diesel power generators and uninterruptible power supplies were in normal operating readiness, they did not take over, for a reason that is not yet known. They will investigate this issue with the manufacturers and provide an official statement tomorrow, which we'll then forward to all clients.

  • Date - 03.03.2014 14:54 - 03.03.2014 23:15
  • Last Updated - 03.03.2014 23:33
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we are going to update the Kernel and all related operating system updates on the server Neptune. This process requires us to reboot the server, which will cause a short outage of about 5 to 15 minutes.

    Update 22.02.14 22:21 CET: The kernel and all updates are currently being installed.

    Update 22.02.14 22:25 CET: The server is going down for reboot. Please stand by, it should be back online shortly.

    Update 22.02.14 22:28 CET: The operating system has successfully booted with the new kernel. All services are currently being started.

    Update 22.02.14 22:37 CET: There are a lot of user crons running concurrently, which is slowing down the server significantly and delaying the start-up of some services.

    Update 22.02.14 22:41 CET: All services and websites are back online. The server load is dropping back to normal levels now that all services have started.

    Thank you for your patience and understanding. If you notice any issues, please let us know.

  • Date - 22.02.2014 22:20 - 22.02.2014 22:43
  • Last Updated - 22.02.2014 22:44
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - Medium
  • Affecting Server - Sierra
  • As part of our regular maintenance, we will update the Kernel on this VPS node.

    This requires us to shut down all VMs and reboot the node, which should take approximately 5 to 10 minutes.

    Update 22.02.14 21:51 CET: The kernel and all operating system updates are being installed.

    Update 22.02.14 22:02 CET: All VMs are going down for reboot. Please stand by.

    Update 22.02.14 22:10 CET: The node has been successfully rebooted. Most VMs are already online.

    Update 22.02.14 22:15 CET: All VMs are up and running again.

    Thank you for your patience and understanding. If you notice any issues, please let us know.

  • Date - 22.02.2014 21:55 - 22.02.2014 22:20
  • Last Updated - 22.02.2014 22:44
Scheduled Network Maintenance at myLoc (Resolved)
  • Priority - Medium
  • Affecting Other - myLoc Data Center
  • On 18.02.2014 between 05:30 and 06:00 CET, the myLoc Düsseldorf data center will conduct a scheduled maintenance of their network infrastructure.

    During this time there will be a short outage of the network connectivity, which affects only our shared/reseller hosting servers.

    We're sorry for any inconvenience this may cause. If you have any questions or concerns, please feel free to contact us.

  • Date - 18.02.2014 05:30 - 18.02.2014 10:28
  • Last Updated - 12.02.2014 15:31
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - Medium
  • Affecting Server - Sierra
  • Due to some issues with the kernel we recently updated to, we are going to install a newer kernel that was released today.

    The kernel update requires a server reboot. All VMs will be shut down during this process, which we estimate will take 10 to 15 minutes.

    Update 30.01.14 19:15 CET: The node and all VMs are going down for reboot now.

    Update 30.01.14 19:23 CET: The node was rebooted and all VMs are back online. We're going to monitor this node closely over the following days to make sure there aren't any issues with the kernel this time.

    Thank you for your patience and understanding.

  • Date - 30.01.2014 19:15 - 30.01.2014 19:24
  • Last Updated - 30.01.2014 19:27
Scheduled Reboot - Xen Configuration Update (Resolved)
  • Priority - Medium
  • Affecting Server - Sierra
  • In order to accommodate some I/O-intensive VMs and prevent the dom0 from running out of CPU, we are going to pin 1 CPU core to the dom0. When dom0 has a dedicated core, there are fewer CPU context switches, giving slightly better performance for all VMs.

    This change is applied at boot time and requires a server reboot. All VMs will be shut down for approximately 5 to 10 minutes.
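
    For reference, dom0 CPU pinning of this kind is typically configured through the hypervisor's boot parameters (for example dom0_max_vcpus and dom0_vcpus_pin) and can be verified after the reboot from within dom0. The sketch below is only an illustration and assumes the xl toolstack is available and run as root.

        import subprocess

        # Show the virtual-CPU to physical-CPU mapping for dom0 so the pinning
        # applied at boot can be verified; the affinity column should list only
        # the single core dedicated to dom0.
        output = subprocess.run(
            ["xl", "vcpu-list", "Domain-0"],
            capture_output=True, text=True
        ).stdout
        print(output)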

    We're sorry for any inconvenience this may cause.

    Update 25.01.14 19:21 CET: The node and VMs are going down for reboot now.

    Update 25.01.14 19:33 CET: The node has been rebooted. All VMs are booting back up and should be online shortly.

    Update 25.01.14 19:35 CET: All VMs are back online. Thank you for your patience and understanding.

  • Date - 25.01.2014 19:20 - 25.01.2014 19:39
  • Last Updated - 25.01.2014 19:39
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we are going to update the kernel of this server, which requires a server reboot.

    The estimated time for the reboot process is approximately 5 to 10 minutes. During this time, the server and all websites will be unavailable.

    We're sorry for any inconvenience this may cause.

    Update 22.01.2014 22:58 CET: The server is going down for reboot now. It should be back online very soon.

    Update 22.01.2014 23:03 CET: The server has been rebooted. The operating system and all services are currently starting up.

    Update 22.01.2014 23:09 CET: All services and websites are back online. If you notice any issues, please let us know. Thank you for your patience.

  • Date - 22.01.2014 23:00 - 22.01.2014 23:10
  • Last Updated - 22.01.2014 23:10
Scheduled Reboot to Apply Xen Configuration (Resolved)
  • Priority - Medium
  • Affecting Server - Sierra
  • In order to increase the Xen memory limit of the dom0 (the initial domain started by Xen on boot), we must reboot the VPS node so the new settings can apply. All VMs (Virtual Machines) will need to be shut down and then the node will be rebooted.

    The estimated time to reboot is between 10 and 15 minutes. During this time all VMs will be offline.

    We apologize for any inconvenience this may cause. There are no further scheduled reboots for this month.

    Update 22.01.14 22:25 CET: The server is going down for reboot now. All VMs are currently being shutdown.

    Update 22.01.14 22:34 CET: The VPS node has been successfully rebooted. All VMs are starting up now.

    Update 22.01.14 22:40 CET: All VMs are back online now. Thank you for your patience and understanding.

  • Date - 22.01.2014 22:25 - 22.01.2014 22:40
  • Last Updated - 22.01.2014 22:40
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • As part of our regular maintenance, we are going to update the kernel on the VPS node Sierra, which requires a server reboot.

    Xen will shut down all VMs (Virtual Machines) and then the server will be rebooted. The shutdown process takes about 10 minutes and the boot process about 5 minutes.

    We're sorry for any inconvenience. Thank you for your patience and understanding.

    Update 18.01.14 20:17 CET: The system is going down for reboot now. All VMs are currently being shutdown. Please stand by.

    Update 18.01.14 20:33 CET: The VPS node has been rebooted. All VMs are currently booting back up.

    Update 18.01.14 20:35 CET: All VMs are back online. Thank you for your patience.

  • Date - 18.01.2014 20:15 - 18.01.2014 20:45
  • Last Updated - 18.01.2014 20:37
Scheduled Maintenance of DE-CIX IXP (Resolved)
  • Priority - Low
  • Affecting Other - myLoc Data Center Düsseldorf - DE-CIX IXP
  • The internet exchange point DE-CIX has announced a scheduled maintenance on a core switch in Düsseldorf for the specified period (DE-CIX ticket 104494).

    The myLoc data center (where all our client servers are located) will therefore interrupt the connection to DE-CIX Frankfurt for about 1 hour. No impact on the network is expected during this period, as all traffic will be routed over their other carriers and exchange points. The NOC will monitor the switches and the operation.

  • Date - 15.01.2014 04:00 - 15.01.2014 06:00
  • Last Updated - 17.01.2014 14:24
Scheduled Maintenance of TeliaSonera IXP (Resolved)
  • Priority - Low
  • Affecting Other - myLoc Data Center Düsseldorf - TeliaSonera IXP
  • The internet exchange point TeliaSonera has announced a scheduled maintenance on a core switch in Düsseldorf for the specified period (TeliaSonera Ticket PWIC45403).

    The myLoc data center (where all our client servers are located) will therefore interrupt the connection to TeliaSonera for about 2 hours. No impact on the network is expected during this period, as all traffic will be routed over their other carriers and exchange points. The NOC will monitor the switches and the operation.

    Update 14.01.14 07:30: There appears to be a general issue with the TeliaSonera route (not just our data center), as we are unable to access some external servers/websites (outside of our data center) that are routed through TeliaSonera. If you're having difficulties accessing our server or a website, please run a traceroute to check if your ISP is routing you through TeliaSonera. We don't have any other information at the moment since we can't contact TeliaSonera directly. If we receive any reports from our clients, we'll get in touch with the data center.

    Update 14.01.14 15:19: The routing issues have been confirmed by TeliaSonera. The following issues were open:

    "Our third line NOC, has discovered an interface between Amsterdam and Düsseldorf that was not forwarding traffic in one direction. The interface is now disabled. "

    "Currently we have an issue with the routing engine on router ddf-b2 where you service is connected.
    This routing engine is overheated."

    All routing issues have been already resolved.

  • Date - 14.01.2014 02:00 - 14.01.2014 15:30
  • Last Updated - 17.01.2014 14:24
Temporary HTTP outages due to resource abuse (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The HTTP service (web server) was unresponsive for approximately 20 minutes within the past 24 hours. Apparently this was caused by one or more abusive accounts that have overloaded the server.

    All services are running properly at the moment. We will monitor the server closely for a while, review the resource usage of some accounts and try to suspend resource-intensive accounts before they cause another resource spike.

    Update 13.01.14 21:30 CET: We have identified the account that overloaded the server during exactly the periods when the HTTP service crashed. This account was receiving an excessive amount of traffic from fake/malicious bots. Most of the bots have been blocked through a set of .htaccess rules. We're still working with the client to find a permanent solution. Should this account cause another resource spike, we will suspend it immediately to avoid further outages.
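
    As an illustration only (not the exact rules applied to this account), blocking fake bots by their User-Agent string in a .htaccess file typically looks like this:

    # .htaccess - deny requests from specific user agents (names are examples)
    BrowserMatchNoCase "badbot|evilscraper" bad_bot
    Order Allow,Deny
    Allow from all
    Deny from env=bad_bot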

    Update 14.01.14 20:00 CET: The resource-abusive account was suspended this morning at 9:00. No further outages or performance issues have been detected since then. If you notice any further issues, kindly let us know.

    We're sorry for the outages caused by the abused/abusive account.

  • Date - 13.01.2014 11:30 - 14.01.2014 19:58
  • Last Updated - 14.01.2014 19:58
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - Medium
  • Affecting Server - Vienna
  • The VPS node Vienna and all virtual machines hosted on it are going down for reboot following a kernel update. This process should take between 10 and 15 minutes.

    Our apologies for any inconvenience this may cause.

    Update 08.12.13 21:05: The VPS node is back online after the reboot. All VMs should boot up within 5 minutes. Thank you for your patience!

  • Date - 08.12.2013 20:57 - 08.12.2013 21:05
  • Last Updated - 08.12.2013 21:05
Packet loss from some locations (Resolved)
  • Priority - Medium
  • Affecting System - myLoc Data Center (shared/reseller hosting only)
  • One of our clients has reported packet losses between 0.1% and 30% from Houston (TX, USA) and from Dubai (UAE). During our investigations, we could occasionally see packet losses of up to 10% from London (UK), Amsterdam (NL), Lafayette (IL, USA) and Nuremberg (DE). Other locations, such as Gelsenkirchen (DE) and New York (NY, USA), don't seem to be affected. We had no further locations to test from, so there could be more locations affected (or not affected).

    This is not caused by our server; it is a routing issue. We have already reported it to the data center, as it appears to be caused either by their network or by their backbone providers. Since it's out of our control, we can only wait for the network engineers at the data center to provide us with more information and fix this issue.

    If you're also experiencing packet losses, please open a support ticket and include a traceroute to the server's IP and an MTR report (My Trace Route). The MTR is required by the data center.
    On Windows, you should run WinMTR for at least 15 minutes and deactivate the name resolution (Options -> untick Resolve names).
    On Mac OS or Linux, you should run this command: mtr -n --report -c 100 IPADDRESS
    Replace IPADDRESS with the IP address of our server. The package mtr needs to be installed.

    Update 03.12.13 13:00: The data center is still investigating this issue.

    Update 03.12.13 14:30: It turns out that the packets were lost before the route entered the data center or even Germany. The network engineers at the data center have investigated this issue extensively and were able to rule out any issue with their network - internally and externally.

    Everything seems to be fine on our side, so if you're having similar routing issues, please contact your ISP (Internet Service Provider) or check if there's anything on your network that could interfere with the network traffic or restrict it. If everything else fails, please send us a traceroute and MTR report.

  • Date - 03.12.2013 08:00 - 03.12.2013 14:30
  • Last Updated - 03.12.2013 15:04
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • As part of the regular maintenance, we are going to install the latest software updates on the server Neptune. The MySQL Governor, CloudLinux and the Kernel will be updated.

    The kernel update requires a server reboot, which usually takes between 5 and 10 minutes.

    Thank you for your patience and understanding.

    Update 30.11.13 23:35: All updates have been successfully installed, the server was rebooted and all services are running fine.

  • Date - 30.11.2013 23:24 - 30.11.2013 23:41
  • Last Updated - 30.11.2013 23:41
Xen and Kernel Update (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • We are going to install the latest stable Xen and Kernel versions on the VPS node Vienna. The updates cover some security vulnerabilities, so we cannot postpone this update to a later date.

    A server reboot is required as part of the kernel update. This will cause a downtime of approximately 5 to 15 minutes until the node and all VMs reboot.

    Thank you in advance for your patience and understanding.

    Update 30.11.13 16:44: All updates have been installed. The server is now shutting down all VMs and will then reboot.

    Update 30.11.13 16:59: The VPS node and all VMs have been successfully rebooted. The system is now up-to-date and running properly.

  • Date - 30.11.2013 16:35 - 30.11.2013 17:02
  • Last Updated - 30.11.2013 17:02
Kernel Update - Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • We will install a new kernel and update all software on the VPS node Sierra. This process requires a server reboot. The estimated downtime is 5 to 10 minutes.

    Thank you for your patience and understanding.

    Update 25.11.13 05:57: The server won't reboot after the kernel update. We're investigating this issue right now and will update you as soon as possible.

    Update 25.11.13 06:09: We were able to identify the problem and fix it. The node is back online and all VMs are booting up now. Sorry for any inconvenience.

    If you notice any issues, please let us know. Thank you!

  • Date - 25.11.2013 05:38 - 25.11.2013 06:12
  • Last Updated - 25.11.2013 06:09
MySQL Update (Resolved)
  • Priority - Low
  • Affecting Server - Madison (previously Neptune)
  • A minor update of the MySQL server is currently being installed. The service should be back in just a few minutes.

    Thank you for your patience.

    Update 08.11.2013 15:49: The MySQL server has been successfully updated.

  • Date - 08.11.2013 15:47 - 08.11.2013 15:50
  • Last Updated - 08.11.2013 16:10
Scheduled Reboot - Abnormal Disk Space Usage (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • Unusual disk space usage has been noticed on this server. There is an excess of about 230 GB (and slowly increasing) that doesn't show up in any folder.

    There is still 27% free space left, so this isn't a critical issue yet, but it must be fixed before it gets out of hand.

    The cause could be some recently updated operating system packages that are related to the hard drives and RAID system.

    Update 07.11.13 17:40 CET: We were unable to track down the excessive disk space usage in any folder. Either the updated OS packages are causing this issue, or a large number of deleted files are still being referenced by a running process. In both cases, a server reboot is required.
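
    For reference, the usual way to check whether deleted files are still held open by a running process is something like this (illustrative):

    # list open files whose link count is below 1, i.e. deleted but still held open
    lsof +L1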

    The server is scheduled for reboot today at 20:45 CET. It usually takes between 5 and 10 minutes for the server to reboot.

    Update 07.11.13 20:47 CET: The server has now been rebooted. It should be back online in a few minutes.

    Update 07.11.13 20:53 CET: The server is back online and the disk space usage is now correct. The services are still starting up at the moment, but should be done in less than 5 minutes.

    Update 07.11.13 21:03 CET: All services are up and the server works smoothly so far. We will continue to keep a close eye on it for a few days. If you experience any issues, please let us know.

    Thank you!

  • Date - 07.11.2013 15:00 - 07.11.2013 21:04
  • Last Updated - 08.11.2013 10:18
Installation of CloudLinux PHP Selector (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • As recently announced, we will install the CloudLinux PHP Selector, which will allow any cPanel account to switch between different PHP versions (the 5.3, 5.4 and 5.5 series) and build PHP based on your requirements.

    This process may cause some very short outages (just a few seconds each) while we recompile PHP and configure the web server. We will also need to go through all custom php.ini files and manually set the custom settings for each account. Custom php.ini files will no longer work, as you'll be able to configure PHP through the cPanel interface.
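
    To give an idea of how these files are located, a command along these lines lists the per-account php.ini overrides (illustrative, assuming the usual cPanel home directory layout):

    find /home/*/public_html -maxdepth 2 -name php.ini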

    Update 01.11.13 01:46: The PHP Selector has been successfully installed. We will now analyze all custom php.ini files and allow access to the required settings. Then we will set the custom values for each account.

    Update 01.11.13 04:30: All custom php.ini settings have been set. We've checked several websites and everything seems to run properly. No errors were detected so far. If you notice anything, please let us know.

    Thank you!

  • Date - 01.11.2013 01:00 - 01.11.2013 06:01
  • Last Updated - 01.11.2013 06:01
VPS Node Sierra unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • The VPS node Sierra has been unreachable for about 10 minutes. We're currently looking into this issue and will update you as soon as possible.

    We're sorry for any inconvenience this may cause.

    Update 29.10.2013 21:04: We're unable to start the server in recovery or netKVM mode, which usually happens when there's a network issue at the data center. The data center has already been informed and we're awaiting their reply.

    Update 29.10.2013 21:17: The data center hasn't replied yet, but this was most definitely another network issue. We have hard-reset the server and all VMs are booting up now.

    We'll need to talk to the data center about these frequent network issues, as this level of service quality is mediocre and affects most VPS clients. We're sorry that you've experienced 2 outages in just 1 week.

    Update 29.10.2013 21:33: A reply from the data center just came in. The reason for the outage was a technical failure of a network router, which affected a few servers, including ours. We've been told that this issue has been permanently fixed.

  • Date - 29.10.2013 20:53 - 29.10.2013 21:21
  • Last Updated - 29.10.2013 21:36
Unstable Network Connection - Loss of Packets (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • The network connection of the server Sierra is very unstable and we've noticed a high rate of packet loss (mostly between 25% and 75%).

    This appears to be another network issue at the data center, because all services on this server are running fine. We've already contacted the data center and await their reply with more information and a permanent resolution.

    Update 20.10.2013 07:37: We've opened another emergency ticket at the data center. There's no word from them yet.

    Update 20.10.2013 07:58: The data center seems to be working on this issue, as the connection seems to be very stable now.

    Update 20.10.2013 08:18: The VPS containers are unreachable. We're now investigating if this has anything to do with the network issue.

    Update 20.10.2013 09:07: All VMs are up and running again. There was an issue with Xen that prevented all VMs from booting up, which we have permanently fixed. This was probably caused by a forced reboot that we did earlier when the server was unreachable due to the data center's network issue.

  • Date - 20.10.2013 06:57 - 20.10.2013 09:07
  • Last Updated - 23.10.2013 15:08
Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc Data Center
  • There is currently a network outage at the myLoc data center in Düsseldorf that affects some of our servers. The data center is already aware of this issue and their on-site team is working at full capacity to resolve it as soon as possible.

    We're sorry for any inconvenience this incident may cause. Thank you for your understanding.

    Update 19.10.13 17:18: Their network is still down and their team wasn't able to provide any more details right now, most probably because they're busy working on a solution. We'll continue to post updates as soon as we find out more.

    Update 19.10.13 17:20: According to the myLoc General Manager, their network should be back online in a few minutes.

    Update 19.10.13 17:24: The network issue has been fixed and all servers are back online. Thank you for your patience and sorry again for the trouble caused by this outage.

  • Date - 19.10.2013 16:34 - 19.10.2013 17:25
  • Last Updated - 19.10.2013 17:25
CDP Backup Replication (Resolved)
  • Priority - Low
  • Affecting Server - Madison (previously Neptune)
  • We have started to generate the daily backup through Idera/R1Soft CDP. Since no remote CDP backups have been generated since 07.10.2013 because of the MySQL issue, the server has to catch up with a lot of data and will do a block scan of the entire hard drives. This is an intensive task and could cause a slightly higher server load, but all websites should work perfectly fine while the backup runs. CDP backups automatically lower their priority when the server is busy. The backup takes 8 hours on average.

    Update 10.10.13 10:56: Estimated time remaining - 9 hours and 21 minutes. The server load is actually low; it's currently at 3.79 and slow-downs are only noticed above 8.0 (since we have 8 CPU cores). Looking good so far.

    Update 10.10.13 14:45: 38% complete and 7 hours left. The server isn't overloaded at all and runs smoothly.

    Update 10.10.13 15:22: 47% complete and 6 hours left.

    Update 10.10.13 17:55: 64% complete and 4 hours and 25 minutes left.

    Update 10.10.13 18:30: 70% complete and 4 hours left. Still no negative impact on the performance, so no need to worry.

    Update 10.10.13 19:35: 83% complete and just 2 hours left.

    Update 10.10.13 20:12: 91% and 57 minutes left.

    Update 10.10.13 21:28: The CDP backup task has been successfully completed.

  • Date - 10.10.2013 10:00 - 10.10.2013 21:28
  • Last Updated - 10.10.2013 21:46
MySQL Server Issues (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • The MySQL server won't start or restart on the server Neptune. Our system administrators are already looking into this issue and will try to resolve it as quickly as possible.

    Update 08.10.13 15:15: We are still actively investigating this issue. The issue seems to be with the InnoDB engine. We're doing everything we can.

    Update 08.10.13 15:29: We're attempting to force InnoDB recovery.

    Update 08.10.13 15:41: MySQL is currently running in recovery mode for InnoDB. While the server is up, it's pretty much in a read-only state for InnoDB databases as we try to locate where the corruption is hiding.
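
    For context, InnoDB recovery mode is enabled through the innodb_force_recovery option in my.cnf (a sketch; the actual level we used may differ - higher levels are more aggressive and effectively leave InnoDB read-only):

    # /etc/my.cnf, [mysqld] section - valid levels are 1 to 6
    innodb_force_recovery = 3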

    Update 08.10.13 16:24: Mysqlcheck is running. It will take a while due to the number of databases.

    Update 08.10.13 16:33: All databases using the MyISAM storage engine are readable and writable. Databases using the InnoDB storage engine are read-only.

    Update 08.10.13 17:01: The performance is affected because of mysqlcheck, which is still running.

    Update 08.10.13 17:15: Mysqlcheck has picked up a few corrupt databases so far. We don't know if these were corrupt before, but we'll try to repair or restore them if necessary.

    Update 08.10.13 17:54: Mysqlcheck has only gotten to the letter D of the database prefixes. This process will probably take about 2 or 3 hours to complete. Please do not panic about corrupted databases. Most databases run on MyISAM and there are only a few InnoDB tables that are affected so far. If there's anything that can't be fixed, we still have backups from the past 4 days. We must wait for Mysqlcheck to complete.

    Update 08.10.13 19:49: Although it's still unclear, we may have to restore all databases from our backup from 08.10.2013 at 01:00 (CEST). Please stand by for more updates, we're still investigating the options we have.

    Update 08.10.13 20:00: We will make dumps of all the databases and then attempt to re-import them. This would allow InnoDB to properly load the data back into its files and should fix the corruption issues. We're currently trying to see if we can get a list of just the databases with InnoDB tables in them. It should really be only those databases that need to be restored. We're waiting for a copy of the mysql folder to finish before we proceed so we have some sort of backup of the databases in their current state.
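
    In essence, the dump-and-reimport approach looks like this (a simplified sketch with a placeholder database name, not our exact commands):

    # dump a database that contains InnoDB tables, then import it back
    mysqldump exampledb > /root/exampledb.sql
    mysql exampledb < /root/exampledb.sql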

    Update 08.10.13 21:11: The mysql folder is still copying, but should be done soon. The hard drives and MySQL server are heavily overloaded due to this process and some websites are timing out since the server is unable to keep up with all requests/tasks.

    Update 08.10.13 22:20: The copy of the mysql folder is done. We're now attempting to dump all InnoDB tables and then restore them back. Again, this might take about 1 or 2 hours due to the huge amount of data. If this works, we can consider this issue resolved.

    Update 09.10.13 01:43: The dump and restore process is still running.

    Update 09.10.13 02:52: The InnoDB tables have been dumped. We're now trying to restore them. This could be the final step, but also the longest since there's a lot of manual work that must be done.

    Update 09.10.13 07:17: We are still working to fix the InnoDB tables. We kindly ask for your patience.

    Update 09.10.13 10:51: We need to drop the InnoDB tables and restore them from scratch. Overwriting the current ones didn't seem to work. The MySQL server remains offline so we can insert the data safely.

    Update 09.10.13 11:30: This is our last attempt to fix InnoDB. If this fails, we'll try one of the backups between the 8th and 4th of October.

    Update 09.10.13 11:53: Recovering the current InnoDB data seems impossible, no matter what we've tried so far. Seems like the only option will be to restore from a backup.

    Update 09.10.13 12:21: We have restored the data from 7th Oct, 2013 01:15:17 (CEST). We are waiting for the recovery to complete and will keep you updated. Once we fix this issue, we'll try to see if there's a way to restore the current data from today to a virtual machine and provide you with any missing data between 7th October and today, but we're not sure yet if that's possible at all. First we must try to make the data from 7th October work properly.

    Update 09.10.13 13:15: The data from 7th October seems to be even worse at first glance. The MySQL server is running again with the latest data from today and InnoDB in read-only mode. We'll see if we can make the data from 6th October work.

    Update 09.10.13 13:45: We must back up all current data remotely with R1Soft CDP Backup and then restore the databases from 7th October. This time from the R1Soft backups, not the local ones.

    Update 09.10.13 14:22:

    Here is the final procedure that is needed for this to work - there is no other way of doing it; we have tried everything else.

    1. We will stop the MySQL server.
    2. We need to COPY the existing MySQL data folder.
    3. We will remove ibdata and any other InnoDB files from the mysql folder.
    4. We'll start MySQL.
    5. We'll start restoring backups from R1Soft/Idera CDP dated 07.10.2013.
    6. The MySQL server should be back up and running.

    This procedure will take a few hours to complete (probably 5 hours), but we're quite sure that it's the only safe way to finally recover the MySQL service.
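
    Roughly, steps 1 to 4 above translate to the following commands (a simplified sketch with assumed default paths; step 5 is done through the R1Soft/Idera console):

    /etc/init.d/mysql stop                                    # step 1: stop MySQL
    cp -a /var/lib/mysql /var/lib/mysql.backup                # step 2: copy the existing data folder
    rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*   # step 3: remove the InnoDB system files
    /etc/init.d/mysql start                                   # step 4: start MySQL again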

    We will proceed with this plan immediately. We will continue to keep you updated on a regular basis.

    Update 09.10.13 16:08: The existing MySQL data folder has been copied. Proceeding to step 3.

    Update 09.10.13 16:40: The MySQL server is back up, but still in read-only mode for InnoDB tables. We're now restoring all databases from 07.10.2013 through Idera/R1Soft CDP Backup. This will take a while since the CDP restore method is much more complex than simply copying files over.

    Update 09.10.13 19:09: The Idera restore has failed, so we'll try to restore the databases in smaller batches to see if that goes through. Otherwise we'll try a different restore point.

    Update 09.10.13 20:38: None of our restore points will import into MySQL. Out of 5 backups, none is accepted by the MySQL server. We're trying to find out what other options we have left. If everything fails, we'll have to do a bare metal restore, which would restore absolutely all data on the server. But that should really be a last resort.

    Update 09.10.13 20:49: One of our system administrators seems to have found a possible solution, but we don't want to raise any false expectations yet. We'll return with more updates soon.

    Update 09.10.13 22:51: Restoring the databases containing the InnoDB tables that we dumped yesterday seems to work, but again, we don't know anything for sure yet. We have basically moved the old, damaged database folders out of the way and created new ones so that the SQL files we had could be imported properly. It's still going on, so we will let you know when that is done.

    Update 09.10.13 22:55: Some websites that didn't open at all seem to open fine now. We hope this will work for all databases and prove stable enough.

    Update 09.10.13 23:15: So far we haven't found any website that doesn't work, but we're still investigating. This pretty much seems to be a permanent fix so far - hopefully.

    Update 10.10.13 02:00: All databases with InnoDB have been recovered. MySQL is running stably. All websites that we've randomly tested seem to work fine. We can hereby say that this issue is finally resolved.

    We'll run a MySQL check and optimization for all databases tomorrow evening. We can't do this now because the nightly backup is running.

    Thank you everyone for your patience and understanding. Our apologies again for all the trouble caused to you and your clients. If you notice any further issues, please contact us.

  • Date - 08.10.2013 14:52 - 10.10.2013 02:05
  • Last Updated - 10.10.2013 02:10
cPanel URL redirections not working (Resolved)
  • Priority - Low
  • Affecting Server - Madison (previously Neptune)
  • It's been reported that the domain redirections to cPanel/WHM/WebMail are not working and return a 500 internal server error. This is only happening if you try to access a cPanel service by appending /cpanel, /whm or /webmail to your domain (e.g. http://exampledomain.com/cpanel).

    We believe this issue is related to the performance/security tweaks that we did last Saturday. Until we fix this issue, please use the URLs below (which we actually recommend using at all times):

    cPanel: https://neptune.customwebhost.com:2083 OR http://exampledomain.com:2082
    WHM: https://neptune.customwebhost.com:2087 OR http://exampledomain.com:2086
    WebMail: https://neptune.customwebhost.com:2096 OR http://exampledomain.com:2095

    Replace exampledomain.com with your actual domain name.

    Update 08.10.13 12:15: This seems to be a CloudLinux/CageFS bug. We've already reported this to CloudLinux, Inc. and await their reply. Meanwhile, please use the URLs above.

    Update 08.10.13 12:51: Our assumption was almost correct. This issue was caused by a component that was included with the last CloudLinux update. We've updated the component and the cPanel redirections work properly now.

  • Date - 08.10.2013 11:00 - 08.10.2013 12:54
  • Last Updated - 08.10.2013 12:54
LiteSpeed Web Server Cache Enabled (Resolved)
  • Priority - Low
  • Affecting Server - Madison (previously Neptune)
  • To speed up the performance of PHP scripts, we have enabled the LiteSpeed Web Server caching, which can speed up PHP scripts by up to 50%.

    More information regarding this and other optimizations will follow next week.

    If you experience any issues that could be caused by this change, please let us know immediately.

  • Date - 06.10.2013 05:09 - 06.10.2013 05:09
  • Last Updated - 06.10.2013 05:14
MySQL Database Optimization (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • Due to increased table fragmentation, we will optimize all databases on the server Neptune. This is a resource-intensive task which could slow down all websites until it completes. Once it's done, some websites should operate a bit faster than before.

    We estimate this process to take about 8 hours to complete.

    Update 06.10.13 03:00: The database optimization is complete.

  • Date - 06.10.2013 22:30 - 06.10.2013 03:20
  • Last Updated - 06.10.2013 03:20
High Server Load (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • We've noticed an abnormal server load on the server Neptune. This is caused by a complete backup process and a database restore process that are running concurrently. Both tasks are important, so we'll try not to stop them unless absolutely necessary. We've set them to run at the lowest priority.

    The estimated completion time is 3 hours. We're sorry if the overload affects the performance of your account(s).

    Update 14.10.2013 10:47: The database restore process has been completed. The backup process has 2 hours remaining, but the server load has dropped already.

    Update 14.10.2013 10:47: All tasks have been completed. The load is back to normal now and the server runs smoothly. If you notice any issues, please let us know.

  • Date - 04.10.2013 09:24 - 04.10.2013 12:57
  • Last Updated - 04.10.2013 12:57
Scheduled Maintenance - Server Reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • We are going to reboot the server Neptune, following a high-priority CloudLinux and Kernel update. The estimated downtime is 5 to 10 minutes until all services become available again.

    Thank you for your understanding.

    Update 03.10.13 23:25: The maintenance is complete. All services are back online.

  • Date - 03.10.2013 23:20 - 03.10.2013 23:29
  • Last Updated - 03.10.2013 23:29
Network Outage at myLoc Data Center (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc Data Center Düsseldorf
  • We have noticed an outage at the myLoc data center, which affects all our servers located there. The data center's website and helpdesk are also offline and the phone lines are busy (probably because all their clients are affected), therefore we do not yet know what is happening.

    We will post updates immediately once we find out more details. You can follow the status of this outage in our billing area under Support -> Network Status.

    Update 28.09.13 09:27: All servers are back up, but myLoc has updated their status page stating that there's an issue with the IP backbone. Their case is still open, so we won't consider this issue permanently fixed yet.

    Update 28.09.13 09:35: According to the data center, they have fixed this issue and they don't expect further outages.

  • Date - 28.09.2013 09:04 - 28.09.2013 09:37
  • Last Updated - 28.09.2013 09:37
500 Internal Server Errors (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • We've received many reports that some websites return 500 internal server errors. We don't know yet what is causing this, but we're currently investigating this issue and trying to fix it as soon as possible.

    Update 21.09.13 08:29: We're recompiling PHP now. This should take about 15 minutes.

    Update 21.09.13 09:25: PHP won't compile. Our system administrators are trying to figure out a solution.

    Update 21.09.13 09:41: It seems to be related to cPanel's EasyApache. We're currently working together with cPanel, Inc. to resolve this issue (or bug, as it seems).

    Update 21.09.13 09:52: Almost fixed. The 500 errors are gone, but we're not quite done yet. We still need to add the PHP modules back that were removed during the troubleshooting.

    Update 21.09.13 10:23: We finally have a solution. This issue was actually caused by a CloudLinux update that interferes with EasyApache. We've worked together with cPanel, Inc. and they were able to find a workaround for this bug.

    PHP has been successfully recompiled and all websites open correctly again.

    We're sorry that it took quite long to fix this issue. If you're still getting 500 errors, please open a support ticket. Thank you!

  • Date - 21.09.2013 07:29 - 21.09.2013 10:24
  • Last Updated - 21.09.2013 10:44
Scheduled Reboot - Kernel Update (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • We will reboot the server Neptune following a kernel update. The server will be back online shortly.

  • Date - 20.09.2013 19:18 - 20.09.2013 19:43
  • Last Updated - 20.09.2013 19:19
Scheduled Maintenance - Rack Transfer (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • Due to the ongoing modernization of the myLoc Data Centers in Düsseldorf, our servers will be moved by the on-site data center team into a brand new rack.

    The move of the server Neptune is scheduled for this Friday, 20 September 2013, at 17:30 (CEST). The downtime will be between 10 and 20 minutes.

    There is no action required from your side, other than taking note of this event.

    We’re sorry for any inconvenience this may cause. Thank you for your understanding!

    Update 20.09.13 18:30: The server move into the new rack has NOT been executed yet. We're still awaiting an answer from our account manager to find out why the transfer hasn't been done and what the current status is. Sorry for any confusion this has caused. We'll keep you updated.

  • Date - 19.09.2013 17:30 - 21.09.2013 08:45
  • Last Updated - 20.09.2013 19:04
DDoS Attack Causes Loss of Packets (Resolved)
  • Priority - High
  • Affecting Server - Sierra
  • The server Sierra is losing up to 100% of data packets due to a DDoS attack targeted at a neighbouring server (not owned by us).

    The data center is monitoring the network and actively null-routing the attacked IP addresses to mitigate the attack. At this point we can only wait and hope that the data center keeps the situation under control.

    We're sorry for the inconvenience caused.

  • Date - 04.08.2013 22:00 - 04.08.2013 22:44
  • Last Updated - 04.08.2013 22:44
Extension of network connectivity at myLoc (Resolved)
  • Priority - Medium
  • Affecting Other - myLoc Data Center
  • The myLoc data center in Düsseldorf has planned an extension of their network connectivity. Although they have taken extensive preparatory measures to prevent downtime, there could still be very short outages or packet loss during the times specified.

    Thank you for your understanding.

  • Date - 19.07.2013 06:00 - 19.07.2013 10:45
  • Last Updated - 18.07.2013 18:51
PHP update and Tomcat upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • We will perform the following updates on the server Neptune:

    • PHP from 5.3.25 to 5.3.26
    • Tomcat from 5.5 to 7.0
    The PHP update may cause a few errors for approximately 10 minutes while the updated php.ini file synchronizes across all CageFS accounts.
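
    If the synchronization takes longer than expected, the CageFS skeleton can also be refreshed manually so that all accounts pick up the new php.ini (a sketch based on the standard CloudLinux tooling):

    # rebuild CageFS for all users
    cagefsctl --force-update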

    Tomcat 5.5 has reached its end of life and Tomcat 7.0 carries the "Experimental" tag from cPanel. Although experimental releases have been tested, please be aware that they could still have some issues, but they'll get patched in future releases.

    Our apologies for any inconvenience the updates may cause.

    Update 14.07.13 16:15: The updates have been successfully installed and our tests haven't returned any issues. If you still notice any issues, please let us know.

  • Date - 14.07.2013 15:37 - 14.07.2013 16:18
  • Last Updated - 14.07.2013 16:18
Planned maintenance for the server Neptune (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of the regular maintenance, we will install software updates and a new kernel on the server Neptune. This requires a reboot of the server, which will cause an outage of approximately 5 to 10 minutes.

    Update 14.07.13 02:12: The server has been successfully updated and rebooted. All services are back online.

    Thank you for your understanding.

  • Date - 14.07.2013 02:00 - 14.07.2013 02:13
  • Last Updated - 14.07.2013 02:13
Server won't boot after kernel upgrade (Resolved)
  • Priority - Critical
  • Affecting Server - Sierra
  • The server Sierra doesn't boot up after a kernel upgrade. We are investigating this issue and will update you once we know more.

    Update 14.07.13 01:00: The server is back online. There was actually no issue, except that it took longer than usual for the server to reboot.

    Our apologies for the inconvenience.

  • Date - 14.07.2013 00:49 - 14.07.2013 01:01
  • Last Updated - 14.07.2013 01:05
VPS Node Vienna unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Vienna
  • The VPS node vienna.customwebhost.com has been unreachable for a few minutes. We have tried to reboot the server, but we suspect that this might be a network-wide issue at the data center.

    We have already contacted the data center and await their reply. Updates will follow as soon as we gather more information regarding this issue.

    Our sincere apologies for the inconvenience.

    28.06.13 10:42: The server is back online. We still have to find out the reason of this outage.

    28.06.13 10:42: It turns out that the data center was hit by a heavy DDoS attack that overwhelmed their entire network, so it took them some time to null-route the affected IP. Everything is back online now.

    Thank you for your patience and sorry again for the outage.

  • Date - 28.06.2013 10:11 - 28.06.2013 10:54
  • Last Updated - 28.06.2013 10:54
Planned MySQL and kernel update (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The MySQL server has crashed two times this week, which is why we'll try to fix these stability issues by updating MySQL to the latest stable version that is available from cPanel. We will also update the kernel, which requires a server reboot. The whole process should take about 30 minutes and we estimate a downtime of about 5 to 10 minutes.

    We're sorry for the inconvenience caused by our planned maintenance, but this update is required to fix the stability issues that we've noticed lately. Thank you for your understanding.

    31.05.2013 23:58: The MySQL and kernel update has been successfully completed. Furthermore, we have also updated LiteSpeed Web Server to v4.2.3, which comes with some performance improvements. All software is now completely up-to-date and the server runs smoothly at the time of this posting.

  • Date - 31.05.2013 23:00 - 01.06.2013 00:05
  • Last Updated - 01.06.2013 00:05
Kernel update on Vienna - Reboot required (Resolved)
  • Priority - Medium
  • Affecting Server - Vienna
  • As part of our regular maintenance, we are updating the kernel and OS packages on Vienna. A server reboot is required after upgrading the kernel.

    The server will be down for approximately 5 to 10 minutes until the system boots itself and all VPS containers.

    We apologize for any inconvenience this may cause.

  • Date - 13.05.2013 01:25 - 13.05.2013 01:33
  • Last Updated - 13.05.2013 01:18
Server unreachable (Resolved)
  • Priority - Critical
  • Affecting Server - Vienna
  • The server Vienna appears to be offline. We will try to access the server remotely to bring it back online and contact the data center. Updates will be posted immediately after we find out more.

    Update 10.05.13 00:28: The server has crashed. It is now back online after a hard reboot. We will proceed checking the server for possible faults. We're sorry for the inconvenience caused!

  • Date - 10.05.2013 00:14 - 10.05.2013 00:37
  • Last Updated - 10.05.2013 00:37
Slow response times and packet loss on Vienna (Resolved)
  • Priority - High
  • Affecting Server - Vienna
  • The VPS node Vienna is experiencing loss of packets and slow response times. This seems to be a network-wide issue. We'll contact the data center immediately to resolve this issue as soon as possible.

    29.04.13 21:07: The data center was hit by a heavy DDoS attack, which has already been mitigated. The response time is back to normal without loss of packets. We're sorry for the inconvenience.

  • Date - 29.04.2013 20:57 - 29.04.2013 21:07
  • Last Updated - 29.04.2013 21:12
500 internal server error after PHP update (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • After updating PHP from v5.3.23 to v5.3.24, the automatically generated php.ini file was set to load the Zend Guard Loader before the ionCube Loader, which caused PHP to malfunction. All websites were returning 500 errors because of this.
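
    For reference, the working configuration loads the ionCube Loader before the Zend loader in php.ini (paths and versions are illustrative):

    ; php.ini excerpt - ionCube must be loaded first
    zend_extension = /usr/local/IonCube/ioncube_loader_lin_5.3.so
    zend_extension = /usr/local/Zend/lib/ZendGuardLoader.so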

    We were able to detect and fix this issue shortly afterwards. PHP loads properly and all websites open correctly at this time.

    Our apologies for today's issues. The server works fine now and no further maintenance is planned for this week.

  • Date - 25.04.2013 10:00 - 25.04.2013 10:26
  • Last Updated - 25.04.2013 10:26
Server Reboot - Critical OS upgrade required (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • Some important operating system packages require an immediate update due to compatibility issues with LVE (Lightweight Virtual Environment). A server reboot is required to apply the updates.

    Update 25.04.13 07:50: Almost done. The kernel requires an upgrade again, so we'll reboot the server once more within about 20 minutes. This case should be resolved then; no further reboots will occur afterwards.

    Update 25.04.13 08:27: All updates/upgrades are done and the compatibility issues have been fixed. The server runs smoothly now with the latest stable OS version (CloudLinux 6.4).

    If you notice any issues after this, please contact our technical support department. Thank you for your patience and understanding!

  • Date - 25.04.2013 07:35 - 25.04.2013 08:27
  • Last Updated - 25.04.2013 08:27
Server reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The server has been rebooted as part of our kernel upgrade process.

    The server is back online running smoothly. We're sorry for any inconvenience caused.

  • Date - 24.04.2013 16:37 - 24.04.2013 16:41
  • Last Updated - 24.04.2013 16:43
Packet loss on VPS node Vienna (Resolved)
  • Priority - Medium
  • Affecting Server - Vienna
  • We're currently observing a data packet loss of 25% to 100%. We will get in touch with the data center immediately to resolve this issue as soon as possible.

    Updates will follow once we have more information regarding this issue.

    09.04.13 10:59: The server hasn't lost any packets for about 20 minutes, but we still don't know what caused this issue. We'll update this case when we receive a statement from the data center.

    09.04.13 17:42: According to the data center, one of their core routers was hit by a massive DDoS attack, which they were able to mitigate in a timely manner.

    This issue should now be resolved. We're sorry for the inconvenience caused.

  • Date - 09.04.2013 10:41 - 09.04.2013 20:31
  • Last Updated - 09.04.2013 20:32
DWDM-path expansion of backbone route (Resolved)
  • Priority - Low
  • Affecting System - myLoc Data Center Düsseldorf
  • During the mentioned timeframe, the myLoc data center will upgrade a backbone link between their data center DUS1 and their network hub DUS4. Since the maintenance is done on the backup path of that connection, there should be no impact on regular network traffic.

  • Date - 10.04.2013 05:00 - 10.04.2013 09:42
  • Last Updated - 07.04.2013 00:46
Network issue at myLoc data center (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc data center
  • We've been notified of a network issue that affects the entire myLoc data center in Düsseldorf.

    Updates will be posted here once we get more information. We're sorry for the inconvenience caused and kindly ask for your patience.

    Update 04.04.2013 18:25: The data center is back online. All systems are up.

  • Date - 04.04.2013 18:20 - 04.04.2013 18:26
  • Last Updated - 04.04.2013 18:26
Router Maintenance at Hetzner Data Center (Resolved)
  • Priority - High
  • Affecting System - Hetzner data center
  • The core routers of the Hetzner data center will be upgraded on 07.03.2013 between 05:00 and 07:00 UTC+1. This maintenance work will cause a downtime of up to 15 minutes during the specified time-frame.


    We're sorry for the inconvenience caused and thank you for your understanding.

  • Date - 07.03.2013 05:00 - 07.03.2013 05:20
  • Last Updated - 07.03.2013 05:20
High IO activity due to complete malware scan (Resolved)
  • Priority - Low
  • Affecting Server - Madison (previously Neptune)
  • We're running a complete malware and virus scan for all accounts on the server Neptune. Although the scan is set to run as an idle process at the lowest priority, the server might still be a bit slower than usual due to the high I/O activity of the hard drives.


    Due to the very large amount of files, the maldet scan should take about 12 hours to complete. Once we receive the scan report, we will contact each client regarding possible malware infections.
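
    For reference, a scan of this kind is typically started with Linux Malware Detect along these lines (a sketch, not our exact invocation):

    # scan all accounts' web directories at the lowest CPU and I/O priority
    nice -n 19 ionice -c3 maldet --scan-all /home/?/public_html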

    Security Advice: For the safety of your own website/account, please always make sure to keep all your scripts up-to-date, including all associated templates, plugins, add-ons, etc. Hackers are actively looking for new vulnerabilities to exploit, which can damage your website and your business reputation as a result. Don't let this happen!

    Update 27.01.13 19:30: The scan has been completed. We will contact you soon if there were any malicious files under your account.

  • Date - 27.01.2013 07:14 - 27.01.2013 19:32
  • Last Updated - 27.01.2013 19:33
MySQL optimization & repair (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • Due to the degraded performance of the MySQL service, we have started to back up all databases, and then we'll proceed to repair and optimize them all.


    We already have the daily MySQL backups generated by Idera/R1Soft and we'll also make a copy on the local drive to make sure that we can restore any database if something goes wrong.

    Due to the large number of databases, the server will be slower than usual. We estimate this process to take about 4 hours to complete.
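
    For reference, the repair and optimization pass essentially boils down to the following (a sketch; the exact options we use may differ):

    # check, repair and optimize every database on the server
    mysqlcheck --all-databases --auto-repair --optimize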

    If you experience any issues during or after this process, please contact our technical support department immediately. They will gladly help you.

    Update 24.01.2013 02:30: All databases have been successfully repaired and optimized. If you notice any issues, please get in touch with our technical support department.

  • Date - 23.01.2013 23:05 - 24.01.2013 02:31
  • Last Updated - 24.01.2013 02:32
myLoc Data Center unreachable (Resolved)
  • Priority - Critical
  • Affecting System - myLoc Data Center
  • The myLoc data center seems to be unreachable at the moment. We'll try to get in touch with them and see what is going on.


    Updates will follow soon. We kindly ask for your patience and understanding.

    Update 21.01.13 18:32: All systems are back online. The cause of the outage is unknown at the moment, but we suspect it was a temporary network issue.

    Update 21.01.13 18:41: Our assumption was correct: it was a network glitch, which has since been fixed. Everything should be fine now.

    We're sorry for the inconvenience caused by this short outage.

  • Date - 21.01.2013 18:28 - 21.01.2013 18:33
  • Last Updated - 21.01.2013 18:44
Kernel upgrade on Vienna (Resolved)
  • Priority - High
  • Affecting Server - Vienna
  • We are upgrading the kernel on the VPS node Vienna, which requires a reboot of the node and all VPS containers hosted on it. This maintenance is expected to take between 5 and 15 minutes.


    Sorry for any possible inconvenience and thank you for your understanding.

    Update 19:24: The VPS node and all VPS containers are back up and running. Thank you for your patience.

  • Date - 14.01.2013 19:17 - 14.01.2013 19:24
  • Last Updated - 14.01.2013 19:44
Routing issue at myLoc data center (Resolved)
  • Priority - Critical
  • Affecting Other - myLoc data center
  • There was a routing issue at the myLoc data center, which has caused a downtime of 14 minutes. The data center has replaced the core backbone router, so this issue should now be permanently resolved. No further downtime is expected. We're sorry for any inconvenience this may have caused.

  • Date - 20.12.2012 21:27 - 20.12.2012 21:36
  • Last Updated - 21.12.2012 05:43
Neptune crashes randomly (kernel panic) (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • This server has crashed twice randomly. Our technicians are already looking into this issue and will do their best to fix the kernel in a timely manner.


    Our sincere apologies for the inconvenience. Please stand by for more updates.

    Update 19.12.2012 17:20: There's either faulty hardware or a buggy kernel at play. In an attempt to fix this issue properly, we'll upgrade the kernel and all CloudLinux components in a few minutes, which unfortunately requires another server reboot.

    Update 19.12.2012 17:30: The kernel has been updated and the server is back up. If the server crashes again, which is unlikely, we'll check all hardware components and replace any faulty hardware if required.

    Update 19.12.2012 20:32: The server continues to run very stably so far. We'll consider this issue resolved. Thank you for your patience!

  • Date - 19.12.2012 16:23 - 19.12.2012 20:33
  • Last Updated - 19.12.2012 20:34
Network Connectivity Issue at myLoc (Resolved)
  • Priority - High
  • Affecting Other - myLoc Data Center
  • It appears that the myLoc data center has issues with their network. They are losing many data packets, which is the reason why our servers are unreachable from some locations.


    We'll try to gather further details about this and post updates here once we know more. Please stand by.

    Update 16.12.12 15:21: This seems to be fixed already. Apparently it was just a temporary issue. Sorry for any inconvenience this may have caused.

    Update 16.12.12 16:35: The network issue has been confirmed by the data center. Their network engineers are working at full speed on a permanent fix. The estimated completion time is 16:40 (GMT+1). We'll keep you updated if the issue persists.

    Update 16.12.12 16:40: This issue has been resolved.

  • Date - 16.12.2012 15:19 - 16.12.2012 16:40
  • Last Updated - 17.12.2012 15:27
Random server crashes (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The server has been crashing randomly for a few days, which is the reason for the recent outages (usually 5-7 minutes). We were unable to find any information in the system logs that would point out why the server crashes, but we're trying to find this out and then fix what's broken as soon as possible. So far we can exclude the possibility of an attack or a lack of system resources. The server had more than enough free resources available shortly before it crashed and there were no significant attacks logged by the firewall.


    We're sorry for the inconvenience caused by these outages and assure you that we'll fix them in a timely manner.

    Update 28.11.12 23:25: It turns out to be caused by some accounts that overload the MySQL server with too many and too frequent queries. The MySQL server causes such high I/O usage that it eventually crashes the server. Because of this, we must consider limiting the MySQL usage of each account by implementing the CloudLinux MySQL Governor. Furthermore, we're also considering limiting I/O usage because the performance is degraded by heavy-traffic websites.

    Update 10.12.12 19:35: The crashes also seem to be caused by the kernel, which we'll recompile during the week. We'll refrain from introducing the MySQL and I/O resource limits if the server doesn't crash after we recompile the kernel.

  • Date - 28.11.2012 19:39 - 10.12.2012 19:35
  • Last Updated - 10.12.2012 19:38
IPMI-Module reconfiguration (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The IPMI module of this server is having a technical issue; therefore, the data center technicians must reconfigure it so we can access the server remotely in case of emergency.


    The reconfiguration should take up to 30 minutes. During this time-frame, the server might be unavailable/offline.

    We're sorry for the inconvenience caused by this outage. Thank you for your understanding and let us know if you have any questions.

  • Date - 22.11.2012 18:30 - 28.11.2012 19:48
  • Last Updated - 21.11.2012 16:11
Server unresponsive - reboot initiated (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • The server Neptune suddenly became unresponsive 4 minutes ago, at 13:13. A hard reboot has been initiated and we'll investigate this issue further after the server is back online. The boot-up should take about 3 minutes from now.


    We're sorry for the inconvenience. Please stand by while we fix this issue.

    Update 19.11.12 13:19: The server is back online. Our system administrators will look into this further in order to prevent this from happening again.

  • Date - 19.11.2012 13:17 - 19.11.2012 13:29
  • Last Updated - 19.11.2012 13:30
Planned network maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Data center - myLoc Düsseldorf
  • As part of the data center's network improvement, the data center will replace central components of their core network. The planned maintenance will be performed from Tuesday, 20th November 2012 to Friday, 23rd November 2012, each night between 02:00 and 05:00 EST.
    This maintenance will lead to temporary outages or limitations of the network connectivity of individual areas and server connections. The data center will try to keep the downtime to a minimum and estimates a maximum single downtime of up to 5 minutes per server/area.

    We apologize for any inconvenience that is caused by this and thank you in advance for your understanding. Our customer service is always at your disposal if you have any questions.

  • Date - 20.11.2012 02:00 - 23.11.2012 23:16
  • Last Updated - 10.11.2012 00:06
Scheduled server reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • As part of our regular maintenance, we need to reboot this server in order to apply some required updates. The boot-up process usually takes between 5 and 10 minutes. During this period the server will be unreachable.


    We're sorry for any inconvenience this may cause.

    Update 01.11.12 23:22: The server is back online.

    Update 01.11.12 23:44: Unfortunately, another reboot is required to complete the maintenance. The server has been rebooted again now.

    Update 01.11.12 23:49: The server is back online and the maintenance has been successfully completed.

  • Date - 01.11.2012 23:15 - 01.11.2012 23:49
  • Last Updated - 01.11.2012 23:49
Hetzner Data Center - Init7 Uplink outage (Resolved)
  • Priority - Medium
  • Affecting System - Hetzner Data Center / VPS nodes
  • The Hetzner data center currently has issues with the Init7 backbone. Therefore, they re-route their traffic through other carriers. Some clients will see higher latency during this outage.


    Update 21:30: Backbone is stable again. Init7 are still investigating what has caused these flaps. Further monitoring is in progress.
    Update 21:40: Flapping has occurred again. Init7 are still researching.
    Update 22:55: Init7 will conduct an emergency reload to upgrade their router in Frankfurt, Interxion to a new software release.
    Update 23:45: Emergency reload has completed. Router seems stable now.
    Update 21.10.2012 01:50: Further flaps have been detected. Investigation is still in progress.
    Update 21.10.2012 02:10: Init7 will upgrade their router in Frankfurt, Ancotel and shut down all the DECIX peering sessions. A router reload will be required.
    Update 21.10.2012 02:55: Router has been reloaded with new software. A possible cause for the flaps might be a software flaw. Init7 are investigating with their vendor.
    Update 21.10.2012 13:00: After the router upgrades and the temporary shutdown of all DECIX peering sessions, no further flaps have occurred.

  • Date - 20.10.2012 22:00 - 21.10.2012 13:00
  • Last Updated - 23.10.2012 07:35
Hardware firewall and server unresponsive (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • The hardware firewall is unresponsive, causing the server Neptune to be unavailable, since traffic must flow through the firewall before it can reach the server.


    An on-site data center technician is currently investigating this issue and will update us with more information as soon as possible.

    We're sorry for the inconvenience!

    Update 13.10.12 12:47: Could be another DDoS, but this is just an assumption. The data center techs are still investigating and we're awaiting their reply.

    Update 13.10.12 12:56: Unfortunately, our assumption was correct. It is another DDoS attack, which is more aggressive than ever. Although we hired a Juniper firewall expert yesterday who maximized the potential of our firewall, the firewall can only handle DoS attacks of up to 90.000 pps (packets per second). This attack has a rate of 300.000+ pps and targets not only the server, but also the router and firewall directly.

    Since this time our entire network is targeted, a simple change of IP is not possible. The data center has deactivated all IPs, hoping that the attack will stop faster. They keep monitoring the network and will inform us once the attack slows down or stops completely.

    Update 13.10.12 13:30: The firewall and some accounts with dedicated IPs are now back online. The main and shared IPs are still null-routed, but the data center techs are analyzing the attack and trying to find a possible solution.

    Update 13.10.12 14:18: The main IP has also been re-activated. The shared IP is still the target of the attack and remains deactivated until the attack stops. We're looking for solutions and will update you ASAP.

    Update 13.10.12 14:40: We're changing tactics. All resellers will be assigned their own IP address so we can narrow down the possible target. This change needs to be done manually for each reseller, so it may take about an hour.

    Update 13.10.12 14:59: All shared hosting accounts have been switched to the IP 5.104.105.135. The reseller accounts are still in process.

    Update 13.10.12 17:30: All accounts (shared & reseller) have been switched to new IP addresses. You can find your IP address either in cPanel's sidebar or in WHM -> List Accounts. We'll now proceed by sending an email with more details and our plan to stop these attacks.

    Update 13.10.12 18:00: The DDoS has started again. We should now be finally able to find out who the target is. Awaiting reply from the data center. Please stand by.

    Update 13.10.12 18:00: Finally! The attacked IP has been tracked down and we were able to find out which reseller account was under attack. The attacked IP will remain suspended for now and all websites are now available again. Updates will follow by email.

  • Date - 13.10.2012 12:05 - 13.10.2012 18:43
  • Last Updated - 13.10.2012 18:43
Planned firewall maintenance (Resolved)
  • Priority - Medium
  • Affecting System - Juniper firewall/Neptune server
  • A Juniper firewall expert has been hired to examine and harden the configuration of our hardware firewall to better protect against DDoS attacks. Although the maintenance should not affect the operation of our services, short disruptions may occur.


    If you notice any issues during or after the firewall maintenance, please contact us.

    Update 12.10.12 19:45: The server is currently not accessible from the outside. This is part of the firewall maintenance which is currently being worked on. The server should go back online within 15 minutes. Sorry for the inconvenience!

    Update 12.10.12 20:02: The server is back online. We're still working on the firewall, so some short disruption may still follow. Your patience is greatly appreciated while we sort this out.

    Update 12.10.12 21:57: The firewall maintenance is done! The DDoS protection has been significantly improved and should now protect us from attacks of up to 95,000 pps (packets per second).

  • Date - 12.10.2012 12:00 - 12.10.2012 22:00
  • Last Updated - 12.10.2012 22:31
DDoS attack (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • A new DDoS attack is hitting our shared IP address. The IP has been null-routed by the data center and we're now in the process of changing the shared IP address again.


    Old shared IP address (attacked): 5.104.105.132
    New shared IP address (replacement): 5.104.105.134

    The accounts with a dedicated IP address are not affected by this attack. Updates will follow shortly.

    Update 09.10.12 18:30: All sites have been successfully switched to a new IP address (5.104.105.134) and should be back online in a few minutes. If you can't access your website yet, please wait until your domain propagates to the new IP address. This process usually takes between 4 and 24 hours. You can try to speed up this process by clearing your DNS cache.
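
    If you would like to verify the switch rather than wait, the short sketch below (illustrative only, not an official tool) checks whether your local resolver already returns the new shared IP. The domain name is a placeholder; replace it with your own.

    import socket

    NEW_SHARED_IP = "5.104.105.134"  # new shared IP from this announcement

    def resolves_to_new_ip(domain: str) -> bool:
        """Return True once the local resolver returns the new shared IP."""
        try:
            addresses = {info[4][0] for info in socket.getaddrinfo(domain, 80)}
        except socket.gaierror:
            return False  # the domain does not resolve yet (or DNS is unreachable)
        return NEW_SHARED_IP in addresses

    print(resolves_to_new_ip("example.com"))  # replace with your own domain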

    We've sent an email to all clients with further information.

  • Date - 09.10.2012 17:50 - 09.10.2012 18:50
  • Last Updated - 09.10.2012 18:50
DDoS attack on Neptune - New shared IP address (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • An IP address on the server neptune.customwebhost.com is probably the target of a DDoS attack. Although we recently installed a hardware firewall that is supposed to protect us against DDoS attacks, unfortunately, the severity of this attack is much greater and our hardware and software firewalls are unable to handle it.


    The data center technicians are currently trying to find out exactly which IP address is being targeted. We will either null-route this IP address or disable it permanently.

    Updates will follow. We're terribly sorry for the downtime.

    Update 05.10.12 14:29: The shared IP address 5.104.105.131 is the target of a very intense DDoS attack. Although we recently added a hardware firewall in addition to our software firewall, both have slowly failed to withstand the severity of this attack. Because of this, the shared IP address is currently disabled until the attack stops. All accounts that point to the shared IP address 5.104.105.131 are inaccessible until further notice.

    Since the DNS propagation would take between 4 and 24 hours, switching the currently affected accounts to a different shared IP address wouldn't be a very effective solution. Therefore, we'll need to wait a while for the attack to stop and then decide if we'll switch the accounts to a different IP address.

    The accounts with a dedicated IP address are online and not affected by the attack any longer. The server itself is also accessible through the main IP address. Only the shared IP address is currently affected and offline.

    Although we've invested a lot of resources to improve our servers and protect them from DDoS attacks by putting them behind hardware firewalls, there are unfortunately still very aggressive DDoS attacks that cannot be stopped with a firewall alone and instead require very complex and expensive infrastructure. Such solutions are, of course, beyond the capacity and budget of most hosting providers. Therefore, our only option during such aggressive attacks is to wait for the attack to slow down to a level that our hardware firewall can handle.

    We will keep you updated with the current status. Minor updates will be posted on our Network Status page and major updates will be sent by email.

    We sincerely apologize for the inconvenience. Be assured that we're doing everything we can to resolve this issue in a timely manner.

    Update 05.10.12 16:42: We're waiting for the data center to inform us if the shared IP address is still under attack. If it still is, then we'll switch all affected accounts to a different IP address.

    Update 05.10.12 17:30: The new shared IP address of the server Neptune is 5.104.105.132.

    If your domains point to our nameservers, there's nothing you need to do besides waiting until the DNS propagation completes. Otherwise, if your domain points to external nameservers, please update it with the new IP address above.

    All accounts are currently back online. If you can't access your website yet, please wait until your domain propagates to the new IP address. This process usually takes between 4 and 24 hours. You can try to speed up this process by clearing your DNS cache.

    Again, we're deeply sorry for the inconvenience and issues caused by the downtime. We'll keep monitoring the server to detect further possible attacks in a timely manner and try to mitigate them when necessary.
    Due to the large number of tickets that came in today, it will take a bit longer than usual to answer them all. We kindly ask for your patience.

    Please don't hesitate to let us know if there's anything else we can help you with.

  • Date - 05.10.2012 12:37 - 05.10.2012 17:25
  • Last Updated - 05.10.2012 20:48
Neptune unresponsive - Reboot required (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • The server neptune.customwebhost.com suddenly became unresponsive for a yet unknown reason. We have initiated a soft reboot and the server should be back any moment.


    Update 05.10.12 10:50: Although the server has rebooted, all services are unreachable. Our system administrators are looking into this right now.

    Update 05.10.12 10:59: It appears that our firewall is also unreachable, which explains why the server is up but not accessible from the outside. We have contacted our data center and they are investigating this issue on-site.

    Update 05.10.12 11:18: The server and firewall can now be pinged, which means they're online, but with huge packet loss of over 50%. Therefore, this actually seems to be an issue on the data center's end that affects us. Updates will follow once we know more.

    Update 05.10.12 11:32: The data center technicians are still investigating and we await more information soon.

    Update 05.10.12 12:35: Apparently it's neither a network issue nor a firewall outage. Unfortunately, we're the target of a DDoS attack again. We'll post more information about this as a separate network issue.

  • Date - 05.10.2012 10:25 - 05.10.2012 12:43
  • Last Updated - 05.10.2012 12:44
IPMI module reconfiguration scheduled (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The IPMI module of our server needs to be reconfigured by our data center after it suddenly became unavailable. The data center technicians will need to shut down the server to complete this task, so unfortunately, there will be a planned downtime of about 30 minutes.


    We're sorry for the inconvenience, but the IPMI module is very important for accessing and fixing the server if it becomes unavailable.

    Update: The IPMI module has been fixed without having to take the server offline. There was no downtime.

  • Date - 07.09.2012 18:00 - 07.09.2012 18:30
  • Last Updated - 08.09.2012 15:12
Installation of ASSP Deluxe (Resolved)
  • Priority - Low
  • Affecting Server - Madison (previously Neptune)
  • In order to improve our protection against incoming and outgoing spam emails, we will implement ASSP Deluxe. It is similar to SpamAssassin, but works more effectively as an SMTP proxy between the internet and the email server.

    The email service may not work properly until we're done with the installation and configuration of ASSP Deluxe. We apologize for the inconvenience.

    26.08.12 16:55: ASSP Deluxe has been successfully installed and configured. You can customize the settings of your account in cPanel -> SPAM ASSP.

    Please let us know if you experience any issues after the implementation of ASSP Deluxe.

  • Date - 26.08.2012 15:40 - 26.08.2012 16:55
  • Last Updated - 31.08.2012 10:58
Server reboot (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The server is being rebooted because the R1Soft CDP backup processes were hanging and causing high I/O. It should be back online within 5 minutes.

    We're sorry for the inconvenience.

  • Date - 18.08.2012 12:19 - 18.08.2012 12:25
  • Last Updated - 18.08.2012 12:41
Server Migration (Apollo to Neptune) (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The announced server migration will start as planned today (14.08.12) at 21:00 GMT+1. From that time on, we recommend refraining from making any changes to your hosting account or website.

    We will first transfer the shared and reseller accounts, and then transfer the sub-accounts of the resellers. If you have a reseller account and don't see any accounts on the new server, please check back again later.

    Below are the details of the new server:

    Server Name: Neptune
    Hostname: neptune.customwebhost.com
    Server IP Address: 5.104.105.130
    Shared IP Address: 5.104.105.131

    cPanel URL: https://neptune.customwebhost.com:2083
    WHM URL: https://neptune.customwebhost.com:2087

    Default/Shared Nameservers:
    ns1-neptune.customwebhost.com - 5.104.105.150
    ns2-neptune.customwebhost.com - 5.104.105.180
    ns3-neptune.customwebhost.com - 85.92.89.23
    ns4-neptune.customwebhost.com - 208.71.175.230

    The default nameservers of the old server (Apollo) will continue to work even after the server migration. You only need to use the nameservers above if you add a new domain or create a new account, unless you have custom/private nameservers.

    Customers with dedicated IP addresses and private/custom nameservers will receive an email with their new IP addresses.

    When an account is successfully transferred, cPanel will lock the account on the old server and point its DNS to the new server. You will then need to wait for the DNS propagation to complete, which can take anywhere from 1 to 24 hours.

    The progress will be posted here every 1-2 hours from the moment we start the transfer.

    STATUS UPDATES:

    14.08.12 21:10: The migration has been initiated. We will start with the shared and reseller accounts (excluding sub-accounts of resellers).

    14.08.12 15:00: All shared and reseller hosting accounts have been migrated. We only need to migrate the sub-accounts of the reseller accounts. Things should go a bit faster from here.

    OVERALL PROGRESS: 100%

    All accounts have been transferred. We still need to make some tweaks and verify if things are working correctly before we officially announce the final completion. This will be done tonight, so if there's anything that doesn't work correctly tomorrow morning (17.08.) we kindly ask you to get in touch with us.

    Thank you very much for your patience! It was much appreciated during this whole process.

  • Date - 14.08.2012 21:00 - 18.08.2012 08:25
  • Last Updated - 18.08.2012 08:25
Server Migration Postponed (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  •  

    We're sorry to inform you that the announced server migration had to be postponed due to a last-minute complication. The migration has been re-scheduled for the 14th of August, 2012 at 21:00 GMT+1.

    The hardware firewall of the new server has been configured by the data center to run in routed mode. Our compatibility tests have shown that cPanel/WHM doesn't work properly with firewalls that run in routed mode, so we have requested the re-configuration of the firewall in transparent mode. This must be done before we can proceed any further.

    Please accept our apologies for the short notice, but we want to make sure that everything works exactly the way it should so we don't run into any unpleasant surprises. Our goal is to make this transition as seamless as possible for you.

     

  • Date - 14.08.2012 21:00
  • Last Updated - 14.08.2012 16:38
R1Soft CDP backups not working (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • We have upgraded the R1Soft CDP server and agent from v3 to the newest version, R1Soft CDP v4.0 Enterprise Edition. Due to compatibility reasons and to make sure that the backups are clean before the migration to the new server, we had to delete the old backups and initiate a completely new backup.

    The backup should finish within 24 hours. During this process, the server Apollo may see slightly higher I/O activity; however, this should not normally cause the websites and services to load significantly slower. The server should be able to handle this just fine.

    29.07.12 11:00: The backups have failed twice. It seems to be a bug, which we have already reported. We're waiting for an R1Soft technician to look into this and fix it as soon as possible.

    30.07.12 18:00: An R1Soft technician has provided us with a potential fix. We're attempting to run the backup again.

    31.07.12 07:30: The CDP backups have failed with exactly the same errors again. We're awaiting the investigation of an R1Soft technician again. Meanwhile, we have started the cPanel backups so that we at least have these if something goes wrong. The server performance will be affected until the backups finish, but this needs to be done regardless.

    31.07.12 09:26: The cPanel backups ate up all resources and caused the server to crash. The server has been rebooted and will be up shortly. We'll now wait for the R1Soft tech to fix the CDP issue.

    02.08.12 16:00: We have contacted R1Soft again and asked them to give our ticket their highest priority. An R1Soft technician is expected to look into our issue again today and hopefully fix it.
    Just as a failsafe precaution, we recommend that you generate your own backup using the cPanel backup feature and download it externally. Your data is safe so far, but we still recommend having your own backups in case something goes wrong unexpectedly.

    03.08.12 00:10: The R1Soft technicians are done with their work and the backups should work now. The CDP backup has now been initiated again and should finish within 12 hours.

    03.08.12 04:50: Only 5 hours left and no errors so far.

    03.08.12 12:45: The backups have failed again. We will re-install and re-configure the CDP Agent from scratch and contact R1Soft again.

    03.08.12 14:30: The CDP agent has been re-installed and re-configured completely. The initial backup has been started again. If the backups still fail now, it can only mean that the cause of this issue is a software bug. In this case we must wait for R1Soft to either provide a hotfix or release a new version of their software.

    04.08.12 04:30: The initial backup has failed again, but now we're getting closer to a solution. It clearly appears that this issue is caused by a software bug and not by a misconfiguration on our side. We've sent a new report of our investigation to R1Soft and asked them to either fix this immediately or help us downgrade from CDP 4 back to CDP 3. The backups worked flawlessly for almost a year with CDP 3. Awaiting their reply.

    08.08.12 00:45: Still no solution from the R1Soft team yet. The new server is expected to be delivered on Thursday anyway, so we'll set up the R1Soft backups on the new server after we migrate the accounts over - unless the R1Soft techs finally come up with a resolution by then, in which case we'd run the backups on the old server too until everything is done on the new server.

    12.08.12 21:00: Since the R1Soft techs still haven't found a solution to our problem and we had to postpone the migration to the new server, we have set up the cPanel backups again just to be safe. This time they're incremental and without compression in order to avoid overloading the server. The backups are scheduled to run daily at 2 AM.

  • Date - 28.07.2012 03:00
  • Last Updated - 12.08.2012 00:23
Apollo - Server reboot (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • 13:07: The server became unresponsive and is being rebooted right now.

    13:17: All services are back online. We will look into the cause of the crash/overload.

  • Date - 03.08.2012 13:06
  • Last Updated - 03.08.2012 13:17
Apollo - Server reboot (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • This server is being rebooted following some performance tweaks. All services should be back within a maximum of 10 minutes.

    30.07.12 10:41: The server is up again. Thank you for your patience.

  • Date - 30.07.2012 10:35
  • Last Updated - 30.07.2012 10:41
Apollo - DDoS attack (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • A DDoS (Distributed Denial of Service) attack has been identified, targeting the IP address 178.63.149.50.

    Due to the severity of the attack, the stability and performance of the server are currently affected. This causes all websites and services running on this server to perform very slowly or not at all.

    We are truly sorry for the inconvenience caused by this attack. Together with the data center, we are working to mitigate the DDoS attack and try to bring the server back to normal. Be assured that we're already doing everything we can to get this issue fixed as soon as possible.

    22.07.12 01:11: It appears that the MySQL port 3306 is the main target of this attack. A request has been submitted to the data center to temporarily block all connection attempts to this port. We're awaiting their reply to our request.

    22.07.12 01:58: The DDoS attack seems to have stopped and the server runs smoothly again. We'll keep monitoring the server actively in case the attack resumes.

    24.07.12 00:45: The server is again the target of a DDoS attack. This time the web server (port 80) is the target. We're already trying to mitigate the attack again.

    24.07.12 01:30: The attack seems to have stopped again. We will investigate this further and try to find out which website on this server is the prime target of these attacks. Once we find out, we will take the necessary action.

    Unfortunately, our current data center cannot provide us with the necessary tools and assistance to mitigate DDoS attacks. We assure you that we're already looking into finding a long-term solution as soon as possible. Our apologies again for the outages caused by these attacks.

    27.07.12 17:00: The DDoS attack has started again. We have already ordered a new server and will place it at a data center that provides DDoS protection. Once the server is set up, we will proceed with installing all required software and migrate all accounts from Apollo to it. Meanwhile, we will try again to mitigate the current attack.

    27.07.12 20:50: The attack doesn't seem to stop. We will attempt to switch all domains that currently point to the shared IP (the target) to a new IP address and remove the current shared IP from the server completely. We hope this will put an end to this attack, at least temporarily, until we move the accounts to the new server when it's ready.

    27.07.12 22:00: Our attempt to switch to a new shared IP address seems to have worked. The server is now operating smoothly again. The new shared IP address is 188.40.42.46. Further updates regarding the upcoming migration will follow soon by email.

    29.07.12 14:25: The DDoS has started again. We've rebooted the server and will try to mitigate it once again.

    29.07.12 14:40: The DDoS has been mitigated so far and we've rebooted the server. We'll keep monitoring this closely.

  • Date - 22.07.2012 00:12 - 29.07.2012 14:45
  • Last Updated - 29.07.2012 14:48
Apollo - Server not responsive (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • The server is not responsive, even after a reboot. We're investigating this issue and will update you once we have more information.

    Our sincere apologies for the inconvenience!

    Update (27.07.12 12:40): The server is back online. It crashed for a yet unknown reason, which we're still investigating. We will definitely run an intensive health check on this server, as it has had an unusually high number of issues, as you may have already noticed.

    If necessary, we will consider the deployment of a new server and the migration of all accounts to it. However, this will need a few days of planning.

    Our sincere apologies for the inconvenience. We assure you that we'll do everything we can to restore the service to the optimal performance you are used to. We kindly ask for your understanding and patience while we work on fixing all these issues permanently.

  • Date - 27.07.2012 12:00 - 27.07.2012 12:46
  • Last Updated - 27.07.2012 12:45
Apollo - Performance degradation (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The performance of this server has recently decreased for a yet unknown reason. The accounts' resource usage is normal, but intense I/O is causing the server to process requests slowly.

    Our system administrators are already investigating this issue. Updates will follow once we know more.

    We're really sorry for the issues we had lately with this server. If further issues continue to appear, we will consider the deployment of a new server and migrate all accounts to it.

    25.07.12 11:00: The performance seems to be affected by the CDP backups, which could not be generated in the last 4 days because of the DDoS attacks. The CDP server needs to scan all files for changes, which is an I/O-intensive operation due to the staggering number of files. Once the backups are done, the server's performance should be back to normal.

    We will keep monitoring this server more closely. Thank you for your patience.

  • Date - 25.07.2012 10:00 - 25.07.2012 11:00
  • Last Updated - 25.07.2012 11:15
Maintenance at backbone Falkenstein-Nuremberg (Resolved)
  • Priority - Low
  • Affecting Other - Connectivity
  • Due to expansion, our data center has scheduled maintenance work on the backbone between Falkenstein and Nuremberg.

    No downtime is expected during this process since the traffic will be routed via the Frankfurt backbone.

  • Date - 31.05.2012 03:00 - 31.05.2012 08:00
  • Last Updated - 02.06.2012 19:03
Network configuration issue (Resolved)
  • Priority - High
  • Affecting Server - Vienna
  • Due to a misconfigured IP subnet, the server is having connectivity issues. The connection drops intermittently for short periods.

    Our system administrators are already working on a fix. Updates to follow.

    Update 29.05.2012 02:25: The network settings have been corrected and the network connection seems to be stable. We'll keep a much closer eye on this server in the hours to come.

    Update 29.05.2012 02:30: There was another outage today at about the same time. The server logs don't show any errors, so it will take some time until we get to the root of this issue. Our investigations are ongoing. We sincerely apologize for the downtime!

    Update 29.05.2012 11:45: The network is back to normal. The root cause of this issue was the kernel module of our network card model. We have reconfigured the network card and it is working as expected.

  • Date - 28.05.2012 00:15 - 29.05.2012 11:45
  • Last Updated - 02.06.2012 19:03
Falsely reported average load, scheduled reboot (Resolved)
  • Priority - Low
  • Affecting Server - Madison (previously Neptune)
  • The average server load of the server apollo.customwebhost.com is being falsely reported. You may notice an exaggerated average load in cPanel, under "Server Status". However, we assure you that the server is running stably and that performance is not degraded by this issue.

    A server reboot is required in order to fix this issue and allow cPanel to display the real server load. The server reboot has been scheduled for today, the 3rd of May, 2012 at 21:00 GMT+1. It should take less than 10 minutes until all services boot and fully operate.

    Our apologies for the inconvenience and thank you for your understanding.

  • Date - 03.05.2012 09:00 - 03.05.2012 21:00
  • Last Updated - 03.05.2012 15:02
Upgrade to PHP 5.3 and MySQL 5.5 (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  •  

    1. Upgrade to PHP 5.3

    As a consequence of PHP 5.2 reaching its end of life and no longer being supported, we will upgrade to PHP 5.3 on Monday, the 30th of April, 2012 at 23:00 GMT+1.

    While this upgrade should not cause any major issues for most websites, it will probably cause some incompatibility issues for older scripts that were developed prior to the release of PHP 5.3. In order to prevent these incompatibility issues, it is highly advised that you upgrade all your scripts immediately. We always recommend keeping all your scripts up to date anyway, most importantly for security reasons.

    2. Upgrade to MySQL 5.5

    cPanel now supports MySQL 5.5 and we will be upgrading to this version right after we finish the upgrade to PHP 5.3.

    This new version has better scalability on multi-core hardware, is more reliable, and performs up to 370% faster on Linux! A full list of what's new in MySQL 5.5 is available on the official site at http://dev.mysql.com/doc/refman/5.5/en/mysql-nutshell.html

    We do not expect any issues in connection with this upgrade.

    3. Closure of MySQL remote port

    Currently, remote MySQL connections are accepted from any IP address without restrictions. This poses a security risk, since attackers can attempt to break into MySQL. In order to close this vulnerability, the MySQL port for remote connections (3306) will be blocked.

    If you need remote access to your MySQL databases, you'll need to provide us with the static IP address of the computer you'll be connecting from so we can add an exception to our firewall. Alternatively, if your account has SSH access, you can set up an SSH tunnel to forward the MySQL port (see the guides and the short sketch below).

    Setting up an SSH tunnel with PuTTY (Windows)
    How to Connect to MySQL via SSH Tunnel (Linux)
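
    For reference, here is a minimal sketch of the SSH tunnel approach in Python (the hostname, username and local port below are placeholders, not your actual account details). It simply launches the standard OpenSSH client with local port forwarding; while the tunnel runs, any MySQL client can connect to 127.0.0.1 on the chosen local port as if the database were local.

    import subprocess

    LOCAL_PORT = 3307                          # any free local port
    REMOTE_HOST = "neptune.customwebhost.com"  # your server's hostname
    SSH_USER = "youruser"                      # your cPanel/SSH username (placeholder)

    # -N: no remote command, just forwarding; -L: forward LOCAL_PORT to MySQL (3306)
    # on the server's loopback interface through the encrypted SSH connection.
    tunnel = subprocess.Popen([
        "ssh", "-N",
        "-L", f"{LOCAL_PORT}:127.0.0.1:3306",
        f"{SSH_USER}@{REMOTE_HOST}",
    ])

    # While the tunnel process runs, point your MySQL client at host 127.0.0.1,
    # port 3307 instead of the server's public IP. Call tunnel.terminate() when done.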

    As always, if you experience any issues after these changes, our technical support department will be happy to assist you.

     

  • Date - 30.04.2012 23:45 - 01.05.2012 01:00
  • Last Updated - 01.05.2012 02:36
Reboot due to kernel upgrade (Resolved)
  • Priority - Medium
  • Affecting Server - Vienna
  • We will upgrade the kernel of this server, which requires a server reboot.

    The expected downtime due to the reboot is about 5 minutes.

    Our apologies for the inconvenience.

  • Date - 27.03.2012 23:00 - 27.03.2012 23:05
  • Last Updated - 03.04.2012 16:15
DoS Attack (Resolved)
  • Priority - High
  • Affecting Server - Vienna
  • This server was affected by a low-rate DoS attack, which caused packet loss and crashed the VPS.

    The DoS attack has been mitigated and we've rebooted the server, which is running stable at the moment.

    We will monitor this server closely to prevent further issues.

  • Date - 27.03.2012 18:25 - 27.03.2012 18:40
  • Last Updated - 27.03.2012 13:19
Heavy DDoS attack on 188.40.42.55 (Resolved)
  • Priority - Critical
  • Affecting Server - Madison (previously Neptune)
  • A heavy distributed denial-of-service (DDoS) attack is targeting the server apollo.customwebhost.com (188.40.42.55). Our system administrators are already mitigating the attack and trying to bring the server back online. We believe the attack might be targeting a specific website that is hosted on our server. Once we block the attackers' IP addresses, the server should be able to operate properly again.

    We apologize for the inconvenience and ask for your understanding. More information will follow soon.


    UPDATE (05.11.2011 06:15): The DDoS attack is massive and comes from hundreds of different locations. There's no way we can mitigate this DDoS attack in a timely manner.

    We're switching all domains to a new IP address that isn't under any DDoS attack currently. The new shared IP address is 178.63.149.50. Please point your domain(s) to this IP address if you're not using our nameservers.

    All dedicated IPs remain the same. If you have a dedicated IP address this change won't affect your account at all.

  • Date - 04.11.2011 23:30 - 05.11.2011 06:15
  • Last Updated - 05.11.2011 10:34
Router Firmware Update (Resolved)
  • Priority - High
  • Affecting Other - Data Center
  • On 14.07.2011 between 05:00 and 07:00 GMT+2 the servers may become unresponsive due to a firmware update of the data center's main routers. The connection disruption may take up to 15 minutes.

    We apologize for the inconvenience. Thank you for your understanding.

  • Date - 14.07.2011 05:00 - 14.07.2011 07:00
  • Last Updated - 27.09.2011 15:24
Conversion to CloudLinux (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • We will convert the server's Operating System from CentOS to CloudLinux, which will help us improve the stability of the shared hosting environment. CloudLinux will prevent overloads caused by abusive accounts that use too many server resources.

    The conversion process should take just a few minutes and will require a reboot. We expect an outage of about 5-10 minutes during this process.

    Thank you for your patience and understanding.

  • Date - 27.09.2011 19:00 - 27.09.2011 19:19
  • Last Updated - 27.09.2011 15:23
High latency caused by damaged cable (Resolved)
  • Priority - High
  • Affecting Other - Data center in Falkenstein
  • The data center is currently experiencing issues with one of their backbone connections between Falkenstein and Frankfurt. During excavation work, the fiber optic cable was damaged. The traffic is currently re-routed through other redundant connections; however, this may cause higher latency and websites may load more slowly from some locations.

    According to the backbone provider, LWL, a team is already on site working to replace the damaged fiber optic cable as quickly as possible.

    We're sorry for the inconvenience and would like to apologize for the performance issues caused. Your patience is much appreciated!

    UPDATE: The fiber optic cable has been replaced and all backbone connections are running properly again.

  • Date - 06.09.2011 15:20 - 06.09.2011 23:30
  • Last Updated - 07.09.2011 00:16
Installation of LiteSpeed Web Server and eAccelerator (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • In order to boost the performance of the web server, we will switch from Apache to the LiteSpeed Web Server. This will reduce the server load and significantly improve the speed of your web sites.

    The web server may become unavailable for about 5 to 20 minutes during this process. We apologize for the inconvenience.

    Update: LiteSpeed Web Server has been successfully installed. The downtime of the web server was almost 3 minutes.

    Additionally, we've also installed PHP eAccelerator, which increases the performance of PHP scripts by caching them in their compiled state, so that the overhead of compiling is almost completely eliminated.

    We can already report a significant boost in performance. The server load has decreased by about 75% and the pages load faster.

  • Date - 00.00.0000 - 00.00.0000
  • Last Updated - 17.04.2011 03:13
Failure of the core router at the Falkenstein data center (Resolved)
  • Priority - High
  • Affecting Server - Madison (previously Neptune)
  • The data center is currently experiencing a network disturbance. One of the core routers is not functioning. The second router, which provides full redundancy, has taken over the traffic, but it doesn't seem to be working properly either.

    The network engineers are working on an accurate diagnosis of the problem.

    We're sorry for any inconvenience and thank you for your understanding.

    Update (11.03.2011 10:35): The router has been re-activated and everything is working properly at the moment. The network engineers are still monitoring the network closely in order to prevent any further disturbance.

  • Date - 10.03.2011 23:44 - 11.03.2011 10:35
  • Last Updated - 12.03.2011 11:35
Switch of PHP handler to suPHP and installation of Suhosin (Resolved)
  • Priority - Medium
  • Affecting Server - Madison (previously Neptune)
  • At MaxterHost, the safety of our servers is one of our highest priorities. In order to improve the security of the servers, we are going to switch the PHP handler from DSO to suPHP and install Suhosin, which adds much greater security to PHP.

    The requirements of suPHP are a bit restrictive, but safe:

    • Inside the public_html folder, folders must have permissions of 755 or lower; 777 or 775 will cause issues
    • Inside the public_html folder, files must have permissions of 644 or lower; 777, 775, 666 and 664 will cause issues
    • Inside the public_html folder, all files and folders must be owned by the user and the user's group; any files or folders owned by root or nobody will cause errors
    • Custom php_values in .htaccess files are no longer allowed. If you need to set a custom PHP value, you need to set it in a php.ini file and add a line to .htaccess that points to the custom php.ini. Please contact our technical support department for further details.

    Many accounts may be affected by incorrect permissions or ownership, which will prevent PHP scripts from running properly and cause an internal server error (error 500). We will run a script that sets the correct file and folder permissions; however, you may still experience issues with some of your PHP scripts.
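
    For illustration only (this is not the exact script we will run), here is a minimal sketch of a permission-normalizing pass over public_html; the account path below is a placeholder.

    import os

    PUBLIC_HTML = "/home/username/public_html"  # placeholder account path

    # Normalize permissions to the suPHP-safe maximums listed above:
    # folders 755 (rwxr-xr-x), files 644 (rw-r--r--).
    for root, dirs, files in os.walk(PUBLIC_HTML):
        for d in dirs:
            os.chmod(os.path.join(root, d), 0o755)
        for f in files:
            os.chmod(os.path.join(root, f), 0o644)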

    The switch to suPHP is scheduled for the 7th of June, 2010 at 01:15 GMT+1. Should you experience any issues with your PHP scripts, please contact our technical support department, preferably including reproduction steps and the exact errors/notices. We will proceed with installing Suhosin about a week later, once we've tracked down any issues caused by suPHP.

    Sincerely,
    Stefan Popescu
    Management
    MaxterHost.com

  • Date - 07.06.2010 00:00 - 14.06.2010 00:00
  • Last Updated - 21.06.2010 15:08
Network issue Nürnberg - Frankfurt (Resolved)
  • Priority - Medium
  • Affecting System - Shared & Reseller Servers
  • Currently, there is a disturbance on the backbone link between Nuremberg and Frankfurt at the Hetzner data center. This can cause brief accessibility issues for all shared and reseller hosting servers. We will post updates here as soon as we have more information available.

    We apologize for the inconvenience and thank you for your understanding.

    UPDATE: The issue is caused by an internal attack of over 50 Gbit/s. The data center is working on a solution.

    UPDATE: The attack is still ongoing, but the data center team has already taken countermeasures, which should begin to show effect.

    UPDATE: The LambdaNet uplink still has some minor problems, but they are almost unnoticeable.

    UPDATE: Traffic towards the uplink is currently being discarded. There should be no more problems.

  • Date - 06.05.2010 14:13
  • Last Updated - 06.05.2010 19:15
Server Migration (Resolved)
  • Priority - High
  • We will be migrating the accounts from the Nottingham server to a new server with an Intel Core i7 920 processor and 12 GB of memory for better performance. The new server is located at the Hetzner data center in Falkenstein, Germany.

    The migration is scheduled to start today (17.07.2009) at 16:00 (GMT+1) and will probably be completed at 20:00 (GMT+1). The new nameservers and account details will be sent immediately after we complete the migration.


    The migration has been successfully completed at 19:00 (GMT+1).

    Please feel free to contact us if you're experiencing any issues after the migration.

  • Date - 17.07.2009 16:00 - 17.07.2009 19:00
  • Last Updated - 17.07.2009 18:53
Migration to cPanel (Resolved)
  • Priority - High
  • The cPanel migration team has started the transfer diagnostics and will proceed with moving the accounts over to the new server once we have positive results. We ask that you do not make any changes to your domains during this process (unless you really have to) so we can have a clean migration.

    Right now it is unknown how long the entire transfer will take; we will let you know as soon as possible. It is our highest priority. Updates will follow.


    A bug *may* have been detected in cPanel's migration tool for Plesk. A cPanel developer is working on a quick fix so the migration can be resumed.

    As you can see, they are actually making sure that everything is transferred correctly. This gives us the peace of mind that it will be a smooth transfer without major issues.


    The migration has been successfully completed!

  • Date - 02.07.2009 17:00 - 12.07.2009 00:00
  • Last Updated - 17.07.2009 12:31
[Nottingham] Server down (Resolved)
  • Priority - Critical
  • The server was not responding. After contacting the data center technicians, it appears to have problems with its hard disks, which we are now working on fixing.


    The RAID card replacement did not help get the server back online. The OS has been reloaded on this server and cPanel is currently being installed on it. In the meantime, we are restoring the data on the server.


    We personally apologize for the pathetic lack of updates. This is not standard policy and you all should have been updated much sooner and routinely of progress. This is being remedied now.

    We are working as fast as possible to get full services restored at this time.


    The data is still being restored on Nottingham. We will keep you updated further.


    cPanel configuration files and MySQL databases were restored. The accounts' home directories are still being restored.


    About 35% of the accounts' home directories have been restored on the server.


    The user data is still being restored, and the HTTP server is being rebuilt as well.

    Updates will follow.

  • Date - 20.04.2009 09:00 - 00.00.0000
  • Last Updated - 27.04.2009 09:08
Overload (Resolved)
  • Priority - High
  • The server mercury.customwebhost.com has been overloaded since it was upgraded to Plesk 9. The operating system needs to be updated and a new kernel installed in order to fix this issue. This will involve a short downtime of approximately 5 minutes while the server reboots.

  • Date - 16.02.2009 23:30 - 17.02.2009 02:30
  • Last Updated - 18.02.2009 09:40
Outage due to kernel update (Resolved)
  • Priority - Critical
  • The server mercury.customwebhost.com fails to boot after an OS and kernel update. System administrators are working on recovering the system.

  • Date - 17.02.2009 05:00 - 17.02.2009 00:00
  • Last Updated - 18.02.2009 09:40
Change of IP Address [83.133.127.40] (Resolved)
  • Priority - High
  • Due to the IP address 83.133.127.40 being blacklisted in some databases and our inability to get it delisted, we must unassign it from all domains pointing to it.

    All accounts pointing to 83.133.127.40 will be pointed to the IP address 83.133.122.240. If your domains are pointing to our nameservers, then there is no action required from your side. Otherwise, you would have to point your domains to 83.133.122.240.

    After the IP address has been switched, users with a cached DNS lookup may not be able to access the affected domains until they're updated with the new IP address.

    We apologize for the IP address change on such short notice, but it is required in order to maintain a proper level of service.

  • Date - 16.02.2009 23:30 - 17.02.2009 02:30
  • Last Updated - 18.02.2009 09:40
[Mercury] Upgrade to Parallels Plesk 9.0 (Resolved)
  • Priority - High
  • An upgrade from Plesk 8.6 to Plesk 9.0 is scheduled. Delays are possible. There is no planned downtime caused by the upgrade, but some services may restart or be unavailable for a few minutes, several times, during this process.

    For more information, please contact semidedicated@maxterhost.com.

  • Date - 15.02.2009 12:30 - 15.02.2009 15:00
  • Last Updated - 16.02.2009 11:08
