Saturday, January 29, 2011

Stopping a DOS attack

One of the sites I work with has recently started to get DoS'd. It started out at 30k RPS and now it's at 50k/min. The IPs are pretty much all unique, not in the same subnet, and are in multiple countries. They only request the main page. Any tips on how to stop this?

The servers are running Linux with Apache as the web server.

Thanks

  • Does your front-end router/load-balancer not have DOS-attack management? Ours does and it makes a world of difference.

    William : The problem is, ALL IPs are unique, from different countries, etc. There is really no way to tell the attacker from a legitimate user. ALL of our bandwidth is being eaten up right now; we can't do anything.
    Chopper3 : But DoS-managing routers and load-balancers don't care where traffic comes from; if they see a lot of a certain type of DoS-related traffic from certain IPs then they ignore it and carry on with their work regardless, allowing servers to serve and customer traffic to be treated correctly. People like Cisco and Foundry make a lot of money from their work in this area, and what you're seeing isn't out of the ordinary at all.
    From Chopper3
  • You're not just trying to withstand a DoS, you're trying to withstand a DDoS, which is distributed and much more difficult to deal with.

    Essentially, you're trying to identify illegitimate traffic and block it. Ideally, you want to null route this traffic (even better, get your upstream providers to null route it).

    The first port of call is identification. You need to find some way to identify the traffic that is being sent to your host. It might be a common user agent, the fact that they're not actually using a proper browser (HINT: do they act like proper browsers, i.e. follow 301 redirects?), whether all requests flood in at the exact same time, or how many requests each IP is hitting your server with per hour.

    You cannot block them without identifying them and you need to find some way of doing that.
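    To make the identification step concrete, here's a minimal log-analysis sketch (assuming Apache's combined log format and the common /var/log/apache2/access.log path; adjust both for your setup):

    # top requesting IPs over the most recent 100k requests
    tail -n 100000 /var/log/apache2/access.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -20
    # top user agents over the same window
    tail -n 100000 /var/log/apache2/access.log | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -20

    Anything that dominates either list is a candidate signature to block on.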

    Those DDoS mitigation tools essentially do the same thing, except in real time, and they cost a bomb. Half the time there are false positives, or the DDoS is so big it doesn't matter anyway, so be careful where you put your money if you do decide to invest in one of them, either now or in the future.

    Remember: 1. IDENTIFY 2. BLOCK. 1 is the hard part.

    William : The problem isn't blocking, the problem is identifying. You can't block something if you can't identify it. So far we've seen no patterns at all. Real browsers, no patterns in request times, completely different countries, no referrer; they follow redirects, they accept cookies. They act just like normal users. It looks almost impossible to tell them apart. We're thinking of routing all traffic to Amazon, having Amazon handle all the requests for the homepage (which will be cached), and having all other pages handled by our web app for now. Thanks for the answer though.
    pboin : Minor correction: they're probably *not* real browsers; keep that in mind when working on identification. Also, what does your user base look like? If it's all US-centric, you might want to block offshore requests as a stopgap to buy you a little bit of breathing room...
    William : They're "real" browsers in the sense that they're using Firefox, Chrome, etc. for their requests. One thing you'll notice is how I said these are unique IPs, running for hours, at that high an RPS. This "person" apparently has a HUGE botnet; even our data center (ThePlanet) can't figure out a way to stop it. It's not very easy to tell if it's a browser or not. If it follows redirects, stores cookies, etc., how do you tell? Besides, you need to remember something: each request is unique. So banning an IP means nothing. The requests need to be blocked before they hit our server.
    Phil : Non-browsers, or text-based browsers, don't tend to run JavaScript. What User-Agent header are they supplying, also?
    From Phil
  • You're assuming that this is an intentional DDoS. The first thing to try is changing the IP address. If it's not in fact intentional, then it will stop.

    Where would these requests be coming from if it's not intentional? It could be random, or it could be a mistaken target. Unlikely, but worth a try.

    Are you sure you're not just getting loads of legitimate traffic? Maybe you've been slashdotted, or something. Try looking at the referrers in the logs.

    You can ask your upstream provider to ask their upstream to assist. Let's say, for instance, that you run a website with UK users only. You can check where the traffic in general originates from using a whois database. Let's say a significant amount of your unwanted traffic happens to originate from Russia, China and/or Korea. Then you can call up your upstream provider and have them call theirs to have them null route your IP addresses temporarily from those areas, assuming they have routers close to the sources.

    This isn't a long-term solution, but it helps if your user base is clustered in a few geographical areas. In the past I've helped customers like this by simply not announcing them to foreign peers, just national ones. This did take some of their business away (users found them unreachable because they weren't available internationally anymore), but it's a lot better than being out of service altogether.

    At the end of the day this is more of a desperate act, but it's better to cut off a limb than lose the body.

    If you're in luck, your upstream provider's provider has the equipment and is willing to help you filter most of the undesired traffic away.
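    If you do end up with a list of offending source networks (e.g. from the whois lookups above) and your upstream can't act immediately, you can at least blackhole them locally; this saves your servers the work, although the packets still consume your inbound bandwidth. A rough sketch, assuming a file blocks.txt with one CIDR block per line:

    # null-route each listed network (Linux, iproute2)
    while read net; do ip route add blackhole "$net"; done < blocks.txt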

    Good luck :-)

Hardware/BIOS Monitoring Tools with Email or other Reporting Functionality

I am looking for recommendations for good software that can report the status of the hardware it is installed on via email or an IP connection.

A bit of background: my organization has 160 locations across the state, each hosting a server. These locations do not have consistent network connections or a steady link to a data collection or reporting server; they do, however, have access to the internet and so can report over email or another indirect reporting mechanism.

I am looking for some kind of software that can monitor hardware status, particularly the health of the mirrored RAID arrays on the server, and report back when the hardware shows danger levels or the RAID array becomes unhealthy. We currently use Windows Server 2003 Enterprise in our environment.

We currently use IBM System x3200 and System x3200 M2 servers in our environment. We have looked at IBM's web page, but we have not found any good monitoring software that meets our needs.

Any recommendations for software that could handle this would be greatly appreciated.

Thank you.

  • Seems like this capability is included with the servers. You probably just need to install and configure the agents on each server, and set up a management console on one workstation.

    IBM Systems Director 6.1 Technical Paper

    ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/xbw03006usen/XBW03006USEN.PDF

    "Simplified interface to trigger events automatically, such as e-mail notification or task execution. " (page 9).

    Laranostz : Looking into it now. :) If it tests out well, I'll take this as my solution.
    Laranostz : It appears that the System Director Software will not serve my needs. The system director collects information from other machines using an agent, then shoots off reports. We do not have a consistent connection to our equipment that would allow the use of this software. I need something that monitors the RAID array directly on the server, then independently sends off an email notification if it fails.
    From Greg Askew
  • Solution found, also via IBM; specifically, from the company that made the RAID controller in our model of servers.

    Greg Askew : Glad to hear! Any details? (raid vendor, is it a service that runs, etc).
    Laranostz : It is a program offered by the RAID vendor, LSI, called MegaRAID Storage Manager. It creates a monitoring service, then allows you to manage all physical and logical drives on the machine. It has error logs that can attach to the Windows system logs, and you can customize the errors to set severity and action when they occur, including sending an email to any address and via any SMTP server you choose. The email includes the error, as well as the name of the server generating the error. http://www-947.ibm.com/systems/support/supportsite.wss/docdisplay?lndocid=MIGR-5077712&brandind=5000008
    From Laranostz

patch management for software compiled from source

I'm doing research into building a LAMP server on Ubuntu or Debian for an intranet. Pretty much everything I have seen recommends building from source, chrooting all the components, and such, for security and the most up-to-date versions of everything. For the specific packages (Apache, MySQL, PHP), would I be better off just sticking with the Debian/Ubuntu packages instead? The client is pretty hardcore into Microsoft products and this is the first attempt at using Linux in a big way at this company, so I want to be able to present something very solid to the management.

  • I'd say just install the default packages from your distribution. If you want to chroot your Apache installation, you can install the mod_chroot package. In Debian/Ubuntu, that would be libapache2-mod-chroot.
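    A rough sketch of that route on Debian/Ubuntu (the module name passed to a2enmod may differ slightly; check /etc/apache2/mods-available):

    sudo apt-get install libapache2-mod-chroot
    sudo a2enmod mod_chroot
    # point the ChrootDir directive at your jail in the Apache config, then:
    sudo /etc/init.d/apache2 restart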

    As far as staying up to date goes, just make sure you're using a supported version of the OS (e.g. Debian Lenny or Ubuntu Hardy LTS), that you have the security repositories activated in your sources.list, and that you update your packages on a regular basis.

    If your client is a Microsoft fanboy, he's surely not used to building things from source, and it shouldn't shock him to use already-packaged products. If you build from source, just know that you will have to follow the security advisories yourself, and patch and rebuild the software every time one is published, instead of just upgrading the packages.

    I know there are a lot of 'security' freaks around who say you should build everything from source, put everything in chroots, and lock your computer in a bank safe. The main problem I see with this is that it won't necessarily increase your security, but it will certainly make the system very hard to administer (I just explained why in the previous paragraph), and in the end it will be less secure than using standard packages with a sound configuration and keeping them up to date.

    controlfreak123 : I'd be curious to hear why you don't think chrooting would make it more secure, but good points in the second paragraph. The main obstacle to getting Linux products into the system isn't so much that management dislikes non-Windows as that the current sysadmin thinks they are unsafe because he himself doesn't understand Linux. If this is accepted by the company I would be the one in charge of administration anyway. I liked your response best, so that's why I chose your answer over the other one! Thanks for the info.
    freiheit : If you can execute arbitrary code as root, you can escape from a chroot. Chroot can be used to help build security tools, but was never designed with that purpose and breaks when you try. Besides, most attackers are primarily interested in making outgoing network connections, which they can do from inside a chroot with or without root.
    Raphink : Automating package updates in a chroot is harder than when you're not using one, although it is very doable. In a lot of situations I've seen, programs installed in a chroot were hardly ever updated, resulting in a far worse security situation than if a non-chrooted program had been used and kept up to date.
    Raphink : Chroots are good, but they often give a false sense of security, unless you really know what you're doing and why.
    From Raphink
  • It's almost always better to use the packages from the distribution.

    1. They're easy to update with standard tools (apt, etc). It's even possible to do this automatically (see the sketch after this list).
    2. The packages have been tested; less likely to break in some weird way.
    3. All the dependencies are handled automatically for you
    4. It's much easier to document so somebody else can take over when you move on. "Standard Ubuntu apache packages" vs. page after page of exactly which options you picked when compiling things, where everything is installed, etc...
    5. Something the client can understand how to manage the basics of. "To update, run these two commands on a regular basis", vs. "to update the OS run these commands on a regular basis; to find out about updates to httpd sign up for this mailing list; to update httpd download the latest file from this site and run these commands".
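    On Debian/Ubuntu, one common way to automate point 1 is the unattended-upgrades package; a sketch (verify the package is available on your release):

    sudo apt-get install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades   # enables the periodic security-upgrade job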
    From freiheit

Set default MySQL connect charset for PHP (in RHEL)?

We're running a hundred or so legacy PHP websites on an older server which runs Gentoo Linux. When these sites were built latin1 was still the common charset, both in PHP and MySQL.

To make sure those older sites used latin1 by default, while still allowing newer sites to use utf8 (our current standard), we set the default connect charset in php.ini:

mysql.connect_charset = latin1
mysqli.connect_charset = latin1
pdo_mysql.connect_charset = latin1

Specific more modern sites could override this in their bootstrapping code with:

<?php
mysql_set_charset("utf8", $dsn );

...and all was well.

Now the server is overloaded and we're no longer with that hoster, so we're moving all these sites to a faster server at our standard hoster, which uses RHEL 5 as their OS of choice.

In setting up this new server I discovered to my surprise that the *.connect_charset directives are a Gentoo-specific patch to PHP, and RHEL's version of PHP doesn't recognize them! Now how do I set PHP to connect to MySQL with the latin1 charset?

I thought about setting a default in my.cnf, but I would prefer not to force every app and client to default to latin1. Our policy is to use utf8, and we'd like to restrict the exception to PHP only. Also, converting every legacy site to properly use utf8 is not doable, since many are of the touch 'm and you break 'm kind. We simply don't have the time to fix them all.

How would I set a default mysql/mysqli/pdo_mysql connection charset to latin1 for PHP, while still allowing individual scripts to override this to utf8 with mysql_set_charset()?

  • Placed inside php.ini, this should do the trick:

    default_charset = "latin1"

    Edit: This obviously isn't exactly the same thing, so you may have better control by using this .htaccess directive for each of those old domains:

    AddDefaultCharset ISO-8859-1

    Though I haven't tested it.

    Dan Carley : Those two options only affect output character encoding from PHP and Apache. It won't affect how data is read from MySQL, further down the chain.
    Martijn Heemels : Those options determine how PHP outputs to the browser, not how it retrieves data from the database. So I'm afraid they won't work.
    Martijn Heemels : Also, be careful with the default_charset option, since that *forces* all output to be the specified charset, and cannot be overridden by a .htaccess or a <meta> tag.
    From gekkz
  • Might this do what you are after?

    mysql_query('SET NAMES latin1');
    

    (Preferably called right after you've established the database connection.)

    Dan Carley : This performs the same bootstrapping function as `mysql_set_charset()` but isn't the recommended approach as per the notes section of http://php.net/manual/en/function.mysql-set-charset.php
    Martijn Heemels : Like Dan says, 'mysql_set_charset()' is the recommended version of that statement. I'd still need to modify every single site though. Maybe I could use PHP's auto_prepend_file?
    andol : While mysql_set_charset() might be the recommended approach, it requires PHP 5 >= 5.2.3, which currently isn't available in RHEL. The PHP version in RHEL 5.4 is 5.1.6.
    Martijn Heemels : @andol, good that you mention it. We're using Zend Server, which includes PHP 5.2.11. Before you ask, it doesn't recognize the *.connect_charset directives either.
    From andol
  • Well, after some searching, it appears the mysql*.connect_charset directives are a Gentoo-specific patch. I've found no way to get the same behaviour with RHEL's default PHP package or Zend Server's PHP stack.

    I've resorted to defaulting MySQL to use latin1, because the majority of sites on this server are legacy. New sites have their charset defined explicitly so they will override the default.
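    For reference, that server-wide default is the character-set-server setting under [mysqld] in my.cnf (older configs may spell it default-character-set), and you can verify what new connections receive with:

    mysql -e "SHOW VARIABLES LIKE 'character_set%';"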

Domains, DNSes and No-Ips. Point a Domain to a no-ip.org address (using a free service?)

I have a problem at the moment: I had registered a domain + hosting, but as I was very unhappy with the hosting, I cancelled the plan and just kept the domain.

I decided to host the code locally, as this is just for the scope of an assignment. I am using no-ip.org in order to point to my dynamic IP. My question is: is there a way to make my domain point to my no-ip.org URL?

Please only suggest free services, as this is just an assignment, and since I purchased a domain I might as well try and make use of it!

I was thinking of some sort of DNS service which can in some manner point to the no-ip address, or perhaps a free webhost which allows me to use my domain and then simply provides an HTML redirect? I don't know; I'm not really familiar with the DNS/domain business, so I would really appreciate any tips (and URLs to relevant sites) please!

Thanks a lot

  • I'm not familiar with no-ip.org, but I've done this sort of thing in the past with dyndns.org, which provides free DDNS services. First, I set up an account with dyndns; chose the name example.dyndns.org (it doesn't matter what name you choose, but you'll need it for the next step); then made www.example.com a CNAME to the dyndns hostname (example.dyndns.org). Substitute your domain name for example.com, of course.
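    Once the registrar change has propagated, you can check the record (and its TTL) from the command line, e.g. with dig:

    dig +noall +answer www.example.com CNAME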

    Erika : this doesn't really seem to be free unless I'm missing something =\
    Darren Chamberlain : http://www.dyndns.com/services/dns/dyndns/.
    Erika : yes, but the CNAME bit doesn't seem to be free
    Darren Chamberlain : You choose your .dyndns.org hostname, but then you set up the CNAME under the domain you had registered.
  • I just went from using CNAMEs to dyndns A records, to just using a DDNS service directly. I know dyndns does it, but I'm not sure that part is free; I'm currently using Everydns.net instead.

    Just add the domain you want to use to the account and run the update client on your server. To make the everydns DNS authoritative for your domain you need to change the DNS servers it uses, normally done via the control panel provided by your registrar.

    Erika : I've really tried this using the redirect option they have, but it doesn't seem to be working for some reason :( Do you have any idea which DNSes I should be using? I used 3 and 4; maybe I'm supposed to add the first 2?
    Oskar Duveborn : What redirect option? Use their dynamic DNS option instead of no-ip, or there's little meaning in switching DNS servers ^^ Yes, use all four of the DNS servers, not that it should matter for your "not working" problem though. Changes to authoritative DNS servers can take anywhere from a few hours to three days or so.
    Erika : They have an option, if I understood correctly, to make a domain hop to another domain, and that's what I really need; that's what I meant by redirect. On my domain I only have the option of adding DNS addresses, though.
  • Just make your domain name's @ record a CNAME to your no-ip.org domain. Also, make the TTL very low, since your IP might change and you don't want cache issues to hit your users.

    Erika : how does one go about changing the domain's @ record, please?
    Raphink : If you bought a domain name, your registrar probably provides an interface to manage DNS entries. Set the entry for the domain name to be a CNAME to your `no-ip.org` domain.
    From Raphink

Network hardware recommendation for new small office going VOIP

We are moving into a new space and moving to a full VoIP environment. Up until now we have used standard land lines and consumer routers (like the Linksys RV042).

In our new space we are getting faster Internet (100mbit) and going to 14 VOIP phones.

New space details:

  • 14 people
  • 14 Linksys SPA-942 VOIP phones
  • 100 mbit Internet connection
  • 18 mixed Windows, Mac and Linux desktops
  • Several internal servers (not public facing)

I'm wondering:

  1. Do you recommend the VOIP phones go into the same router as the rest of the network or two routers (1 for VOIP, 1 for all else)?
  2. What kind of router or routers do you recommend to handle all this traffic?
  3. With 14 VOIP phones, do we need to do anything to prioritize traffic for the phones on the router, or is that not necessary with a small deployment?

Thanks!

    • What do you mean by router?
    • What kind of switches do you use?
    • Do you have a VoIP provider?
    • How are the VoIP phones set up?
    • Do you have the budget to do it right, or does it have to be cheap?
    Justin : A router is a device that lets you create a network; see: http://en.wikipedia.org/wiki/Router. We have an unmanaged Dell switch ATM, but we can buy new ones. Our VoIP provider is RingCentral. "How are the VoIP phones set up?" Please explain this. "Do you have the budget to do it right, or does it have to be cheap?" Bit of a dubious question. We have the budget to build an efficient, secure and reliable network, yes.
    From cstamas
  • I'd almost certainly go for a Cisco Catalyst 3750V2-48PS. The reason is that it's a very capable L2/L3 switch with Power-over-Ethernet that I know for a fact works great with that specific phone, and plenty of people have the same combo. It's not the cheapest L3 switch out there, but it's pretty damn powerful, small, takes a redundant power supply if required, and is as reliable as you get for this kind of work. You may also wish to consider the upgraded, 1Gbps-capable version, the Cisco Catalyst 3750G-48PS; more expensive obviously, but it will deal with your PCs and servers far better than the 100Mb-only option.

    Feel free to come back to me with any follow-up questions, OK?

    tomjedrz : I have had good luck with the HP ProCurve line. It is cheaper and has a lifetime warranty.
    From Chopper3
  • There are two schools of thought as far as the network for the phones goes --

    1. Keep voice and data separated so that a failure of the primary network doesn't affect the phones. The phones gotta work, man, so they should have their own switches and have the voice network terminate in a PBX that has its own connection to the firewall.
    2. Make the whole network as reliable as possible. Why should the voice part of it be the only thing that's reliable? All ports should be able to provide PoE and dial-tone VoIP QoS. End of story.

    I firmly believe that with modern equipment it is pretty easy to create a unified network that provides high levels of reliability for VOIP and standard data applications.

    For instance, with something like the Enterasys C3 switches you can create policies that give one class of device (based on 802.1x or MAC) super-high QoS but at most 300kbps of bandwidth. Everything else gets a lower QoS but is allowed as much bandwidth as needed. They do lots of other cool policy-based stuff, as well as the standard PoE / L3 routing (static routes, OSPF and RIP).

    Just be sure you've got big UPSs on all equipment that's needed to support 911 / emergency calls.

    If you're going to go with unmanaged switches, or switches that can't protect your network from things like broadcast storms or other DoS-type things, you will want to split your network into voice / data, but I believe doing so makes the network unnecessarily complex.

    Oh, and as far as your border goes, use a router such as a Netscreen 5GT that can make sure you have bandwidth, memory, and ephemeral ports set aside for the VoIP traffic, if you're using a SIP trunk service such as Callcentric.

    From chris
  • I presume that you are connecting the phones to a VoIP phone system elsewhere, rather than to a local VoIP phone switch. If that is true, the phone provider will likely have an opinion. Be aware that their opinion serves their needs (reducing support calls), not necessarily your needs (ease and cost effectiveness).

    Do you recommend the VOIP phones go into the same router as the rest of the network or two routers (1 for VOIP, 1 for all else)?

    I recommend a single network. Set the phones at the desks, connect them to the main switch, and plug the PCs into the phones. A single 24-port switch will suffice, provided it has PoE. Confirm that the switch can provide maximum load to all of the ports at the same time! I have had good luck with HP ProCurve. Get redundant power supplies on the switch, and a good UPS with battery backup. Do the math (see the example below) to make sure you have several hours of battery life.
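    As a rough worked example (the per-phone wattage is an assumption; check the phone's spec sheet): 14 phones at ~5 W of PoE each, plus perhaps 100 W for the switch itself, is on the order of 170 W of load. A typical 1500 VA UPS runs that for roughly an hour, so for several hours of runtime you'll want a bigger unit or extended battery packs; use the vendor's runtime chart for your exact model rather than guessing.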

    Plug the internet router and the servers into the same switch. You will end up with a clean, easy to manage network that will perform well.

    Setting up two networks sounds cleaner, but it isn't. You end up with twice the hardware, which means twice the expense and triple the management hassle. You may need to set up VLANs and put the phones on a separate VLAN, but I doubt that will be necessary with 14 phones. You should be able to configure DHCP to allocate the phones and PCs to different pools if you are so inclined.

    What kind of router or routers do you recommend to handle all this traffic?

    I defer to others on this question, as I have only done local VoIP. That said, I don't think it will be that much bandwidth ( < 800 Kbps max). You do need to make sure that the router supports QoS, and ideally that you can get QoS all the way to the phone provider.

    With 14 VOIP phones, do we need to do anything to prioritize traffic for the phones on the router, or is that not necessary with a small deployment?

    With 100Mbps to the internet and local Gigabit, you should be fine as long as you get sufficient bandwidth (~800Kbps) to the phone provider. Check with the phone provider to see if they require anything. They likely recommend Quality of Service (QoS), and may require it.
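    As a sanity check on that figure (the codec choice is an assumption): 14 concurrent G.729 calls at roughly 31 kbps each on the wire come to about 440 kbps, while uncompressed G.711 at roughly 87 kbps each is about 1.2 Mbps, so ~800 Kbps is a reasonable planning number between the two.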

    If you start getting bandwidth-related problems, set up QoS. The telltale sign of bandwidth issues is calls getting tinny. You set up QoS on the phones, the switch, and the internet router, and have the internet provider configure it on their side.

    Marie Fischer : "Set the phones at the desk, connect them to the main switch, and plug the PCs into the phones." If you are setting up a new office and have enough cables & wall outlets, I would not use the phones' switches, but connect the PC's directly to the main switch. Just seems cleaner to me.
    tomjedrz : Then you need twice as many switch ports. For managed PoE switches, these are more expensive than the phones!
    Justin : Yes, but adding an abstraction layer between my PC and the Intertubes just slows it down, ever so slightly.
    From tomjedrz

Connection string during installation

Hi,

I've been convinced to use Windows setup files (MSI) for the installation of my new Windows Forms application, after I asked a question here and got some excellent answers (thank you all): http://serverfault.com/questions/97039/net-application-deployment

Now I have a different question:

My application will need to access a SQL Server to provide users with data, which means that the connection string must be stored in the client's app.config file.

How should I handle this?

Should the user enter the connection string to that database during installation? How would they get the connection string? In an email from the admin? What if the admin wants to use SQL authentication and needs to put the user info in the connection string?

So you know, the app will be sold via the internet, so I don't have any access to the admins or the network.

Any suggestions?

Thanks in advance.

  • Consider that the connection information may change at any point in the application's lifespan. Because of this, installation time is not the best location for this to occur.

    Most commonly, you'll want to prompt the user for this information at startup if it is not already present, or if it is present but connection/login fails (with an appropriate error message). Upon successful login, store the information in the registry or whichever app config data solution you're using. You may also want to look at encrypting this data for security purposes, if you think that clients will be using per-user authentication.

    Marc : Hm. Perhaps an auto-config option, where the client enters the name of a server (provided by the admin), which in turn does nothing but provide the necessary credentials? That way, the only thing you're storing locally is the hostname of that server. This also gives flexibility in terms of changing DB server connection info (including expired passwords, etc).
    Marc : It sounds like you already have a server component for this, or is it just the database that resides on the server? If you do already have a server component, then could you modify that to include this service? It need not be as heavy as a web service - a simple socket listener/response could do it.
    Marc : One rather messy possibility would be store the info in a registry location; and require the client admins to push out registry updates to the individual users/groups via login scripts. Far from ideal... but I don't know that there are many other paths open to you. Either an authoritative source (server call), require user input at install or runtime, or this. There are really not many options to do something automatically when the values must be supplied by a human; and those values are not fixed for the life of the application.
    From Marc
  • Store the connection string encrypted in app.config. You can use your own encryption, or the configuration classes in .NET have encryption methods built in. Make sure you use something besides the default machine encryption, so the admin can create a config file and use it across his/her local network install.
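    As a sketch of the built-in route: the framework ships a command-line tool that encrypts a config section in place. Note that aspnet_regiis targets web.config files, so for a desktop app.config you'd typically call SectionInformation.ProtectSection from code instead; the path and framework version below are placeholders:

    rem encrypt the connectionStrings section of the config in C:\MyApp (DPAPI provider)
    %windir%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -pef "connectionStrings" C:\MyApp -prov DataProtectionConfigurationProvider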

    Like Marc says, check on startup whether there is a valid connection; if not, prompt the user for connection details. The local admin can run the app, create the config settings, and deploy them from there.

    Ross

    From boezo

apache2: Require valid-user for everything except "special_page"

With Apache2 how may I require a valid user for every page except these special pages which can be seen by anybody?

Thanks in advance for your thoughts.


Update in response to comments; here is a working apache2 config:

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    Order allow,deny
    allow from all
</Directory>

# require authentication for everything not specifically excepted
<Location / >
    AuthType Basic
    AuthName "whatever"
    AuthUserFile /etc/apache2/htpasswd
    Require valid-user
    AllowOverride all                       
</Location>

# allow standard apache icons to be used without auth (e.g. MultiViews)
<Location /icons>
    allow from all
    Satisfy Any
</Location>

# anyone can see pages in this tree
<Location /special_public_pages>
    allow from all
    Satisfy Any
</Location>
  • this should do the trick:

    <Location / >
     AuthType Basic
     AuthName "whatever"
     AuthUserFile /etc/apache2/htpasswd
     Require valid-user
     AllowOverride all                       
    </Location>
    
    <Location /yourspecial>
     allow from all
     Satisfy Any
    </Location>
    

    satisfy any is the crucial one..
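    A quick way to verify the split from the command line (assuming the vhost answers on localhost):

    # should return 401 without credentials
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost/
    # should return 200 without credentials
    curl -s -o /dev/null -w '%{http_code}\n' http://localhost/yourspecial/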

    matt wilkie : except `satisfy` is not valid for `<Location>` :(
    pQd : @matt wilkie - believe it or not, this is a tested, working config.
    matt wilkie : @pQd hmm, it appears in this instance the documentation is wrong. Thanks for the help
    matt wilkie : sorry, I have to retract the accepted answer status. I thought it was working but when I went to a different machine the "popup auth dialog first then show the special page" behaviour reasserted itself.
    pQd : did you check what happens at the HTTP level [with Fiddler or Wireshark]? are you sure it's not a problem of a missing / at the end of the URL?
    matt wilkie : I've updated the question to include full details of my current setup
    From pQd
  • How could these directions be adapted for .htaccess?

    How would a rule for a specific root FILE be adapted to .htaccess?

    joschi : Create a new question.
    From oh me

What are some good web based trouble ticket systems?

I'm looking for a trouble ticket system so that I can track issues for various customers. I used one at my previous employer, but it wasn't anything too great. I'm wondering which ones you would recommend, and what other features I might want to look for to make my job easier. It must possess the following.

  1. Web based
  2. Handle multiple clients (probably all support this)
  3. Asset management
  • Commercial Products:

    • Atlassian Jira (which ties in nicely with Confluence (the wiki)) - Surprisingly good for commercial software.

    Open Source:

    • Edgewall Trac - (Wiki, Ticketing, and more)
    • Drupal (+plugin)
    • Best Practical RT - I've heard good things about this, but have never used it myself
    Christopher Cashell : Request Tracker (RT) is a great choice. Jira works well enough as a bug tracker, but is far from ideal as a general purpose ticket tracker tool. Also, I didn't find any decent asset tracking support when I used it at my previous job. Confluence is an acceptable Wiki, although I don't care for their attempts at forcing it into a hierarchy. The tie-in between Jira and Confluence isn't bad.
    From Xerxes
  • I would suggest Request Tracker (RT) with the AssetTracker plugin.

    RT: http://bestpractical.com/rt/

    Plugin: http://wiki.bestpractical.com/view/AssetTracker

    hurfdurf : If you go this route, also use the RT-Extension-CommandByMail and TimeWorkedReport addins to simplify your life. It helps to have someone fluent in perl nearby to do the initial configuration.
    From dexedrine
  • Trellis Desk by Accord5 is open-source and freeware

    stukelly : Trellis Desk url: http://www.accord5.com/trellis
  • We use FogBugz for bug tracking, but it's expensive ($100 per developer) and depending on how you feel, the interface is so-so.

    We also use Trac for internal help desk ticketing; depending on how you feel, the interface is worse, but it's much cheaper. You have to run your own server with Python though.

    And you can also use 37Signals's HighRise, which I've heard some people have had some success turning into a helpdesk system. It'll keep track of all your correspondence and attachments. But using it will involve tweaking your notions of a help ticket system, which in some ways is a good thing.

    TonyUser : FogBugz also has a "Students and Startup" version for free (2 users), but it's hard to find. Search for "Student" on their site.
  • I used OTRS for some time. It's free and open-source.

    From Albic
  • For a ticketing system, I've used and recommend Cerberus, but I don't think it does asset management.

    www.cerberusweb.com

    From Eddy
  • OTRS is a little cumbersome, but I like it well enough.

    RT is commonly held to be the best. It's very big and very configurable. Not really one for small jobs, but perfect if you need something really big and robust. It also integrates well with LDAP, and that's a plus.

    Gnats is a granddaddy product that some people swear by. I find it feels as old as it is. I wouldn't recommend it unless all the users were techies, since it's not pretty and not very user-friendly. That being said, I hear it's quite easy to set up.

  • SmarterTicket

  • I have been using [AdventNet ServiceDesk Plus][1] for years; love it. It's an all-in-one web-based product, including a ticket system, asset management, knowledge base, purchasing, contract management, etc. Very easy to install on both Linux and Windows platforms. Straightforward to use as well.

    They actually have various products that you may find interesting. And best of all, most of their products have a free version that you can use forever. Better check it out.

    Cheers, Kent

    [1]: as a new user, I can't post a URL for the product (quite dumb IMHO), but you can easily Google it.

    From kentchen
  • well, I've been using a customised Mantis, and I'm very happy with it

    From Decio Lira
  • Redmine

    Redmine is a flexible project management web application. Written using Ruby on Rails framework, it is cross-platform and cross-database.

    Redmine is open source and released under the terms of the GNU General Public License v2 (GPL).

    From Karsten
  • For a hosted solution, you might want to check out zendesk.

    From jaaronfarr
  • We are using Lighthouse (lighthouseapp.com), which is a hosted, commercial product. Rather simple to use, not overloaded with a million fields, good email integration, and essentially free for open-source projects.

    From jcfischer
  • I use connexit, works rather well, though a bit pricy.

  • get a CMS like Joomla or Drupal and add a ticket plugin; there are a bunch, and most of them are free, so you should try at least a couple.

    From Terumo
  • GLPI is asset management software with a trouble ticket system.

    From TRS-80
  • Also have a look at Intervals. It is a great task / bug / issue tracking system.

  • If you are looking for a more IT related solution, as opposed to bug tracking, then I cannot recommend AutoTask enough. It has everything that you need and more, without being cumbersome to setup or maintain.

    Key Features:

    1. Entirely web-based, so you can manage your tickets from anywhere.
    2. Asset Tracking
    3. Knowledgebase
    4. Invoice directly from within the app and/or export to QuickBooks (as well as exporting to XML if you want to convert it for use with another accounting program).
    5. Client Access so your customers can input tickets directly
    6. Resource management so you can schedule service calls for yourself and other employees.
    7. Outlook/Exchange integration for calendaring (scheduling service calls, etc)
    8. Project management features for items that are too large for single tickets.

    We've been using it for 2-3 years, and we'd be lost without it. The only issue we had was tax rates on invoices - the company is American, and we are in Canada and have different legal requirements for listing taxes. The support folks bent over backwards to come up with a fix for us, which was also great.

    From Skawt

Linux gzip multiple subdirectories into separate archives?

How would I be able to compress subdirectories into separate archives?

Example:

directory
 subdir1
 subdir2

Should create subdir1(.tar).gz and subdir2(.tar).gz

  • How about this: find * -maxdepth 0 -type d -exec tar czvf {}.tar.gz {} \;

    Explanation: you run find on all items in the current directory. -maxdepth 0 makes it not recurse any lower than the arguments given (in this case *, i.e. all items in your current directory). The -type d argument matches only directories. Then -exec runs tar on whatever matches ({} is replaced by the matching file).

    EarthMind : But then I still get the error that the given path is a directory and not a file
    Juliano : gzip alone doesn't archive directories
    Raphink : You need to tar directories before you can gzip them.
    freiheit : You need tar with the z option, not straight gzip for directories. Very nice usage of find, though.
  • This will create a file called blah.tar.gz for each entry blah inside the directory.

    $ cd directory
    $ for dir in `ls`; do tar -cvzf ${dir}.tar.gz ${dir}; done
    

    If you've got more than simply directories in directory (i.e. files as well, as ls will return everything in the directory), then use this:

    $ cd directory
    $ for dir in `find . -maxdepth 1 -type d  | grep -v "^\.$" `; do tar -cvzf ${dir}.tar.gz ${dir}; done
    

    The grep -v excludes the current directory which will show up in the find command by default.

    Juliano : Both ls and find were NOT made to be used in this way. Especially ls: it is intended to present files in a user-readable list, not to generate file lists as arguments to 'for'. Just use `for dir in *`
    EarthMind : I've tried your first suggestion but it doesn't work with directories containing spaces in their name
    Raphink : Furthermore, the first command will also make a tarball for each file in the directory.
    From Phil
  • This small script seems to be your best option, given your requirements:

    cd directory
    for dir in */
    do
      base=$(basename "$dir")
      tar -czf "${base}.tar.gz" "$dir"
    done
    

    It properly handles directories with spaces in their names.

    EarthMind : It gives me this error: bash: -c: line 1: syntax error: unexpected end of file
    Juliano : @EarthMind: It works fine here. Check if the script was copied correctly.
    freiheit : @EarthMind: I'm not sure you spelled out the original question well, then. You want to run this again and get new .tar.gz files while leaving the prior .tar.gz files alone? Try adding "tmstamp=$(date '+%Y%m%d-%H%M')" and change ${base} to ${base}-${tmstamp}.
    Juliano : @EarthMind: if you are going to put everything in one line, make sure there is a semicolon (;) right before the tar command. Otherwise, base is passed as an environment variable to tar, instead of being a shell auxiliary variable.
    From Juliano

Small Business Server 2008 Standard vs Windows Server + Exchange

As far as I can tell, the only compelling reasons to get SBS are Exchange and Remote Web Workplace. Less interesting but useful features are Shared Fax and Backup. Most of the other "features" of SBS are free products like WSUS and WSS, or trialware (Forefront).

I'm playing with pricing here, and it looks like I can get Windows Server 2008 Standard x2 plus Exchange Server 2010 (or 2007) for only $1500 more than SBS 2008. (I only need Exchange CALs for a portion of the devices on the network)

I've been running SBS since 4.0 and have always found it... annoying. For instance, I just read that in SBS 2008 we're limited to one network connection... which sounds fine unless you have a legitimate use for a second connection, like iSCSI, etc. Crap like this drives me up the wall.

So my question to you all is: what are the actual non-configuration-wizard features of SBS that I'm going to miss if I go with standalone Windows Server and Exchange products? So far I've got:

1) Remote Web Workplace 2) Shared Fax 3) SBS Backup (it's sorta neat, but I hear Server 2008 will incorporate it eventually?)

What else is unique to SBS that I'm overlooking here? Again, I'm only looking for usable features...not features that limit usability or supposedly make it easier to use.

  • Well, from what I remember, Exchange CALs are included in your SBS 2008 CALs. And if you purchase the Premium edition of SBS 2008, you get a second Windows Server license, so you can use it for either an edge firewall or anything else (SQL), except Exchange, since that must stay with the "PDC" (the main SBS 2008 server).

    I had this question myself, and I went ahead and chose SBS 2008 over separate purchases of each "server" software. For us it ended up coming out cheaper, with a bunch more little pros than cons, such as the ones you mentioned.

    I could be wrong, but I believe you can use more than one NIC; however, it isn't supported.

    http://blogs.technet.com/sbs/archive/2008/09/26/can-i-use-terminal-services-in-sbs-2008.aspx

    Multiple NICs in SBS 2008 Discussion

    Boden : Thanks for your comments. I'm considering virtualizing and if I do so I think I'm going to lose several of the smaller benefits of SBS, such as VSS backups and shared fax. The premium version of SBS is very expensive if you've no need for SQL Server...it's cheaper to just buy another Windows 2008 license.
    Evan Anderson : There are no "PDC" computers in Active Directory. All copies of an Active Directory database hosted by DCs in the same AD domain are equal. The SBS Server computer must hold the two forest-wide and three domain-wide FSMO roles, as well as being a global catalog server. One of those domain-wide FSMO roles is "PDC Emulator", but that doesn't make the copy of AD on the SBS Server some kind of special "primary" copy.
    AdminAlive : Correct, and I am well aware of how Active Directory works. What I meant was misunderstood; PDCs were NT days, thus the quotes. My apologies for using the term too loosely.
    From AdminAlive

VMWare Server flakiness on Ubuntu 9.10

Note: I looked into ESXi, but it does not support my CPU.

I have encountered the following error on two machines:

  1. Ubuntu Desktop 9.10 32-bit version, hardware is a 64-bit AMD AthlonX2 CPU and 1 GB RAM
  2. Ubuntu Server 9.10 64-bit version, hardware is a 64-bit AMD Phenom CPU and 3 GB RAM

I started at the Ubuntu wiki, which led to this blog post with an installer shell script that applied VMWare's patches for the 2.6.31-14-server kernel.

After a couple of hiccups, I got things installed and was able to sign into the web admin on port 8333.

Attempting to stop VMWare using

sudo /etc/init.d/vmware stop

saw a failure on the Virtual Network. Attempting to start it:

sudo /etc/init.d/vmware start

VMware Server is installed, but it has not been (correctly) configured for the running kernel. To (re-)configure it, invoke the following command: /usr/bin/vmware-config.pl.

Running the configuration says:

sudo /usr/bin/vmware-config.pl

The following VMware kernel modules have been found on your system that were not installed by the VMware Installer. Please remove them, then run this installer again.

vmmon vmci vmnet

I.e. - 'rm /lib/modules/2.6.31-14-server/misc/*.{o,ko}'

Execution aborted.

Following these instructions sometimes gives a working system; sometimes it takes a couple of tries. The server will work for a little bit, then throw the same error.
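For the record, the remove-and-reconfigure cycle the error message asks for looks like this (module names taken from the error above; $(uname -r) substitutes the running kernel):

sudo rm -f /lib/modules/$(uname -r)/misc/{vmmon,vmci,vmnet}.{o,ko}
sudo /usr/bin/vmware-config.pl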

Has anyone got a stable VMWare Server setup on a Ubuntu Server 9.10 host? If so, how?

  • Unfortunately I have found no solution yet, and I have exactly the same issue.

    It starts to look like these problems are showing up when:

    • Ubuntu 9.10
    • 64 Bit (hw/os?)
    • Vmware Server 2.0.2

    My setup is 2.6.31-16-server #53-Ubuntu SMP Tue Dec 8 05:08:02 UTC 2009 x86_64 GNU/Linux, running on a quad core Xeon (IBM x3650).

    hewhocutsdown : This didn't work for me, but it seems to for others: http://radu.cotescu.com/2009/10/30/how-to-install-vmware-server-2-0-x-on-ubuntu-9-10-karmic-koala/
  • I've redesigned the script with a better patch and now VMware Server seems more stable. Please test it again. Also now I've added support for Fedora and openSUSE.

    hewhocutsdown : We'll give it a try, thank you.
  • Is there any reason why you must run Ubuntu 9.10 as the VMWare Server host OS?

    I have been reliably running VMWare Server (1 & 2) with Ubuntu Server 8.04 LTS for years.

    It is supported by VMWare and installation is a lot simpler, i.e. install gcc and your kernel headers and you are good to go.

    I have documented the few optimisations I put in place at the OS and VMWare level below. From experience I have found this results in more consistent performance across the VMs:

    And here is a quick rundown on how to enable USB support on Ubuntu 8.04 with VMWare Server 2:

    p.s. I have set up hypervisors running Ubuntu 8.04 32-bit and 64-bit without any problems.

    hewhocutsdown : We ended up using VMWare Player on Ubuntu 8.04 64-bit in the end; haven't had a chance to test Radu's new script, hopefully in a week or two. Thanks for the resources.