Teaser: Revamping Zentyal Cloud UI


When the latest updates to the Zentyal Cloud user interface were released, we thought they were good enough for the time being, but we all knew that a complete overhaul would eventually be necessary. We wanted to improve the usability of the interface: make it easier to generate a new report, make it possible to view your Zentyal server farm at a glance…

With this new look we have made an important change: the workflow is now built around a concept called “context”. A context is the environment you are currently working in. Zentyal Cloud now has two kinds of contexts: Zentyal servers and groups of Zentyal servers. Every feature has become context-based; configuration backup, alerts, jobs, monitoring, software updates, reporting and our add-ons (Disaster Recovery and Advanced Security Updates) all show different information depending on the context you are in. For instance, you can see alerts from one server and then from another, or you can look at the monitoring graphs for a group of servers and afterwards take a detailed look at one of them. Context switching is done gracefully with a slide animation and a search button. So quite a few things have changed, but everything is still there.

A shiny new dashboard is also available, and not just one: each server and server group has got its own! Now you can see the severe alerts, the connected servers, the latest support issues and a graph of aggregated load for the servers in a group. Moreover, the server dashboard may also show the module status, remote access status or the load graph for the last hour.

Additionally, the Zentyal Cloud settings section has grown, and every configurable feature has been moved there. For instance, group, job and report management are now more visible in this part of the application.

Finally, we have worked very hard to make the monitoring feature more intuitive and powerful on the Zentyal Cloud side. We have switched our plotting library to Flot, and we have added the ability to query custom time periods, so you can check what happened yesterday from 1pm to 2:34pm.

But I have to stop talking and let you take a look at the new interface, which will be made available later this week (remember that you can get a free Basic Subscription to see Zentyal Cloud in action).


This is the direction we have taken to improve the usability of Zentyal Cloud, and we will keep improving it with your feedback, as well as adding new features to make Zentyal Cloud even more useful and easy to use.

To sum up, the context is the key to the new look & feel. Do you think this is more usable and useful? We do :).

A post by Enrique Hernandez

Deploying Django applications using Zentyal Web Server


Zentyal Server offers features that allow it to be used as a web server not only for static content, but also for different kinds of rich applications. In this post I’ll explain, step by step, how to publish a Django application under its own domain.

As you may know, Django is an open source web framework designed to ease the development of complex, database-driven websites, with an emphasis on modularity, automation and the use of widely known design patterns such as MVC. It also ships with very useful plugins that provide implementations of common features such as basic authentication and a CRUD interface.

In this tutorial we’ll make use of the WSGI interface to deploy the application.

To start with, you should have:

  • A Zentyal Server with the Web Server module installed.
  • A Django application to deploy, e.g. exampleapp
  • A domain, e.g. example.com

So, let’s start!

Creating the virtual host

The first step is to create a virtual host on the Zentyal Server. A virtual host lets a single physical server, with a single IP address, serve several domains, each with its own configuration.
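Under the hood, a name-based virtual host is just an Apache configuration block along these lines (a simplified sketch for illustration; Zentyal generates the real one for you):

<VirtualHost *:80>
    # The domain this block answers for
    ServerName example.com
    # Where the site's files live
    DocumentRoot /srv/www/example.com
</VirtualHost>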

Now go to your Zentyal Server web interface and click on Web Server; if you have the module installed, you’ll find it in the Infrastructure section. If not, install it!

Once you are in the Web Server configuration screen, click on Add New… You’ll see that you basically only need to specify the domain of your virtual host to create it. You can also choose whether to enable the virtual host (you do) and whether to enable SSL support; for the latter you’d need to create a certificate first, and you can check the Web Server module documentation for more information.

Now just enter your domain in the Name field and click on Add. Save changes and that’s it! If your domain is already configured to resolve to the IP address of your Zentyal Server you can try to visit your new web site. It will currently show only an empty directory, so let’s fill it with your application.

Preparing the Django application

If your application already has a WSGI (Web Server Gateway Interface) file, skip this step. If not, open a text editor and write:

import os
import os.path
import sys

# Uncomment one of these lines if your application is not on the default
# Python path (see the explanation below):
#sys.path.append(os.path.realpath(os.path.dirname(__file__)))
#sys.path.append('/srv/www/example.com')

# Tell Django which settings module the served application uses
os.environ['DJANGO_SETTINGS_MODULE'] = 'exampleapp.settings'

# Instantiate the WSGI handler the web server will call
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

Not everything is always needed here, but there are three mandatory lines:

os.environ['DJANGO_SETTINGS_MODULE'] = 'exampleapp.settings'

This tells the Django WSGI handler which settings module belongs to the application being served.

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

This instantiates the WSGI handler that will mediate between the web server and the Django application.

The other lines can be used to add the path where your application is installed to the Python path: the first one assumes the application is installed in the same directory as the WSGI file itself, while the second one points at the directory created by Zentyal for the virtual host. If your application is installed in one of the default paths, you don’t need either of these lines. You can also put any other valid Python code there that your application might need.

Save your file where you want, e.g. in /srv/www/example.com/example.wsgi.

Configuring the web server

The two final steps are to make sure your web server has all the needed dependencies and to add the remaining configuration.

The dependencies are the Python and WSGI modules for Apache and, of course, Django. To install them and enable the modules, run:

sudo apt-get install libapache2-mod-python libapache2-mod-wsgi python-django
sudo a2enmod python
sudo a2enmod wsgi
sudo /etc/init.d/apache2 reload

Finally, you can add any configuration to your virtual host by adding files to /etc/apache2/sites-available/user-ebox-example.com. For this example, let’s map a path in the URL to the WSGI handler. To do this you can use the WSGIScriptAlias Apache directive and the WSGI file, so open your favorite editor again and type:

WSGIScriptAlias / /srv/www/example.com/example.wsgi

This indicates that requests for the root of this virtual host will be handled by the indicated file. Change the path if your WSGI file is somewhere else. After that, save it as /etc/apache2/sites-available/user-ebox-example.com/django, reload Apache and your application will be deployed.
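Depending on your Apache defaults, you may also need to grant access to the directory containing the WSGI file. A sketch of what the complete fragment could look like (Apache 2.2 syntax; the paths match the examples above):

WSGIScriptAlias / /srv/www/example.com/example.wsgi

<Directory /srv/www/example.com>
    # Allow all clients to reach the WSGI script
    Order allow,deny
    Allow from all
</Directory>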

Summary

  • Add a virtual host
  • Prepare a WSGI file
  • Install dependencies (libapache2-mod-python, libapache2-mod-wsgi and python-django at least)
  • Add the WSGIScriptAlias directive to your virtual host configuration

As you can see, it’s really easy to deploy complex web applications on your own domain using Zentyal. Why not give it a try?

A post by Jaime Soriano

Use of RAM disks to improve performance of caches


In the Zentyal project we automate tests with ANSTE. This tool is able to deploy complex scenarios in which several virtual machines are installed during the process. As you can imagine, creating lots of different images is a heavy task. To speed it up we make use of RAM disks, a feature offered by Linux and other operating systems to create filesystems directly in RAM, which are tens of times faster than physical disks.

Our testing machine, a quad-core computer, was blessed from its very first day with 16GB of RAM, to be able to deploy a very broad spectrum of scenarios with virtual machines, routers, networks and so on: the type of complex scenarios in which Zentyal servers are involved every day. We dedicated 12 of its 16GB to a RAM disk. For that we used tmpfs, a filesystem that can be mounted on any Linux system at any moment: you only need to specify the size at mount time; the module reserves the space and provides its own filesystem, so there is no need to format the virtual disk. All the configuration needed to use it is this line in /etc/fstab:

tmpfs           /tmp    tmpfs   size=12G,nr_inodes=120k 0       0
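If you want to try it without touching /etc/fstab, the equivalent mount for the current session is:

sudo mount -t tmpfs -o size=12G,nr_inodes=120k tmpfs /tmp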

But we had a problem with this setup: from time to time we had to reboot the machine because the RAM disk seemed to be full, even though it wasn’t.

After some time we bought a second testing machine with very similar specs, simply to increase the availability of this valuable resource, so that our developers can use the new machine’s free time to test complex scenarios while programming new features, or to quickly reproduce the cases involved in reported issues.

On this machine we decided to try raw RAM disks. The main differences from disks created with tmpfs are that raw RAM disks don’t include any filesystem (so you need to format them before mounting) and that they don’t check whether there is enough RAM for them, so if you are not careful the kernel can take all the space and block the machine.

The first thing you need to do to use this technology is to add a kernel parameter that specifies the size of the disks. In Ubuntu Server, or in any other Linux system with GRUB 2, you can do that by editing /etc/default/grub and adding the option ramdisk_size=12000000 (in kilobytes) to the variable GRUB_CMDLINE_LINUX:

GRUB_CMDLINE_LINUX="ramdisk_size=12000000"
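After editing the file, regenerate the GRUB configuration and reboot so the kernel picks up the new parameter:

sudo update-grub
sudo reboot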

Then you need to format the disk and mount it at boot time. For that we implemented a simple upstart task (e.g. /etc/init/mounttmp.conf):

description     "Mount tmp filesystems on boot"

start on startup

task

script
    # Format the RAM disk with ext2, reserving no blocks for root
    mke2fs -m 0 /dev/ram0
    # Mount it over /tmp and restore the usual permissions (world-writable, sticky)
    mount /dev/ram0 /tmp
    chmod a+rwxt /tmp
end script

As you can see, we format the virtual disk with the ext2 filesystem. We chose it because it’s very well known, but especially because it doesn’t have journaling: a very cool feature for avoiding disasters when you have precious data on your physical disks, but totally useless when you are caching files in temporary storage.

So far we haven’t had any problems with raw RAM disks, and both options have very similar performance, so we have finally discarded tmpfs and configured our first testing server to use a raw RAM disk as well.

Although we use them for testing, this is only one example of what can be done with temporary RAM disks. You can apply this solution to speed up almost any application that makes heavy use of cached data stored on physical filesystems, without a single change to the implementation. And, as a last note, there are also very interesting uses of RAM disks with union filesystems, but that is another story.

A post by Jaime Soriano

Dynamic DNS with Zentyal (Community approach)


DHCP and DNS infrastructure services are well suited to working together, so Zentyal, whose aim is to provide a tightly integrated product, had to make them work together.

The cooperation is based on the DHCP server asking the DNS server to add DHCP clients to DNS zones. For instance, your client called foo asks for an IP address lease to connect to your local network. Once the lease is granted, the DHCP server asks the DNS server to add the foo client, with the given IP address, to the configured dynamic DNS zone, for example example.lan.

Therefore, if you ask for foo.example.lan, the leased IP address is returned, and reverse resolution works as expected too.

The above behaviour is the simplest and the easiest for the Zentyal community to understand, as these posts reminded me. But I had thought the right way to do it was for the sysadmin to set the hostname; that is, if a host is dynamic and is not a server, a generated name such as dyn-192.168.0.1.example.lan is assigned. Furthermore, due to time constraints I was not able to finish a complete dynamic DNS + DHCP feature in the past, and reverse resolution was not implemented. That was odd for Zentyal users, although it was stated clearly in the documentation.

So we decided to fix these things when a slot of time became available, and I was able to finish the proper implementation with these shiny features:

  • Set the hostname based on DHCP client data when the client belongs to a DHCP range
  • Reverse resolution is working for DHCP client names
  • Mix statically and dynamically defined records in DNS zones

The latter feature is a very neat one. It allows you, as Zentyal administrator, to have a DHCP server assigning names and IP addresses dynamically, as hosts request and release leases, while still being able to manage the zone yourself and add other resource records to it, such as specific host names, aliases, name servers or mail exchangers.

To do that, I used the nsupdate tool to apply the manual settings to a dynamic zone, which avoids corrupting the journal file kept by the named daemon and is as kind as possible to BIND.
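For illustration, a dynamic update with nsupdate looks roughly like this (a sketch: the zone and record are hypothetical, and the update must be authorized by your BIND configuration):

nsupdate <<EOF
server 127.0.0.1
zone example.lan
update add mail.example.lan. 3600 A 192.168.0.10
send
EOF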

Dynamic and static DNS domains

To sum up, to get the most out of this new feature you must first create a domain in the DNS section and then select it in the Dynamic DNS options of the DHCP section. Once this is done, save changes. From then on, every new client request will update your DNS zones to provide direct and reverse resolution for that name-IP address pair.

Dynamic DNS options in DHCP module

This feature is available in the 2.x package series of Zentyal server, namely in the ebox-dns 2.0.3 and ebox-dhcp 2.0.4 packages.

Listening to your community’s requirements usually makes your product better! :)

Best regards and thanks a million for your feedback on this great feature!

Edit: I have updated the post to include the definitive versions of the packages.

A post by Enrique Hernandez

Migrating mail to Zarafa using IMAP


The Zentyal 2.0 release included a new groupware module to support the Zarafa Collaboration Platform. This software can be used as a unified communications platform: it includes a lot of features such as calendar, contacts, tasks, mail gateways and much more. It supports many clients, such as Microsoft Outlook 2007, web browsers and any client compliant with protocols like ActiveSync, BlackBerry Enterprise Server, POP3/IMAP and iCal/CalDAV.

For storing mail, Zarafa has its own database, so if you want to migrate all the messages of an account from a standard mail server to Zarafa, the best way is to use the IMAP protocol, which allows reading all the messages on the original server and copying or moving them to the new one. Although you could probably use your mail client to manually copy all your messages from one account to the other, there are tools that automate this process. In this article I’m going to explain how to use imapsync, one of these tools, to migrate a mail account.

imapsync can be installed on a Debian-based system using apt-get:

sudo apt-get install imapsync

After installing it we need to know, at least, the following information to connect with our mail accounts:

  • Authentication mechanism or protocol used to connect to the server; usually plain, SSL or TLS.
  • Hostname of the server that contains the account
  • User name
  • User password

And you will also probably need information about how the messages are stored in the server:

  • Path separator, which can differ between servers; it is usually “.” or “/”.
  • Path prefix, added to every path on the server.
  • Particularities of the header information, which matter when comparing messages to avoid duplicates.

If you are using Zentyal and Zarafa, the separator is “/” and there is no prefix. You’ll need to take this into account when configuring the destination server during the migration.

imapsync tries to avoid duplicate copies of the messages, and for that reason, by default, it compares all the headers of copied messages with those already in the destination account. This can cause problems with servers that add custom header information, because the headers will always differ; this is the case with Zarafa. To solve it, you can specify how messages are compared. The most common solution is to use only a few header fields; for this migration we will use only Message-ID, which is usually unique.

Once you know all this, you can start using the tool. Let’s see the needed parameters (parameters ending in 1 refer to the origin server, and those ending in 2 to the destination server):

  • --noauthmd5, to avoid md5 authentication.
  • --ssl1 and --ssl2, to use SSL connections.
  • --host1 and --host2, hostnames.
  • --user1 and --user2, user names.
  • --password1 and --password2, to specify the passwords on the command line. This is only useful if you are going to write scripts to migrate lots of accounts; if you are migrating only one, it’s safer to let imapsync ask for the passwords when needed.
  • --sep1 and --sep2, path separators.
  • --prefix1 and --prefix2, path prefixes.
  • --skipheader, a regular expression of headers to skip when checking for duplicated messages.

In the end, the command looks something like this:

imapsync --noauthmd5 \
    --ssl1 \
    --host1 origin.example.com \
    --user1 jsoriano@origin.example.com \
    --ssl2 \
    --host2 mail.example.com \
    --user2 jsoriano \
    --sep2 / \
    --prefix2 '' \
    --skipheader '^(?!Message-ID)'

Notice that the format of the user account depends on the server, e.g. Zentyal uses the complete mail address in the mail module, while Zarafa uses only the user name. Notice also how the --skipheader argument and a regular expression are used to specify that all headers other than Message-ID have to be skipped.

If your connection to the servers is not very good and you keep getting disconnected, you can run the command in a loop and add the --skipsize argument to avoid the initial size checks:

while ! imapsync --skipsize [...]; do true; done

And finally, if you want to copy only some directories, you can use the --folderrec argument followed by the base path.
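For instance, to migrate only the INBOX folder and everything under it, append the argument to the command above (the folder name depends on your server):

imapsync [...] --folderrec INBOX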

Once all your messages have been migrated, you can also configure your mail client to use the same folders as Zarafa, e.g., in Thunderbird you can specify the following paths:

  • Move deleted messages to Deleted Items
  • When a message is sent, put a copy on Sent Items
  • Move spam to Junk E-mail

With these instructions you should have all you need to migrate your IMAP account to a Zarafa server.

A post by Jaime Soriano

Zentyal Disaster Recovery


Hi there people!

It’s been a long time since I last wrote on a blog (two and a half years, exactly). Somehow you almost forget you have a blog on which to write stories, or even something interesting. However, at Zentyal there was a proposal to spread the word, both internally and externally, about the stuff we do; I was there to give a Zen Chat, and I’m here again to talk about a brand new service we offer: Disaster Recovery.

Have you ever found yourself in a situation where you have lost everything on your server because of a power outage, or simply because the server’s hard disk died? Or, even having configured backups properly, have you lost hours or even days setting everything up again? This is where Disaster Recovery comes in to help you reduce your downtime. With this service, your data is safely stored in a remote and redundant location, with a pretty straightforward configuration.

Disaster Recovery backup configuration

Available data to back up

But the main advantage of this service is how easy it makes recovering your server from a disaster. Suppose you have lost your server’s hard disk but, thanks to $DEITY, you have found an empty one on your desk. The recovery procedure is then as simple as following these steps:

  1. Insert the Zentyal Server ISO and enter Disaster Recovery mode
  2. Configure your new hard disk and minimal network settings
  3. Enter your user/password for Disaster Recovery service
  4. Select your server and date
  5. Have a cup of tea and watch how progress bars work for you

Restoring process

To make this feature work seamlessly in Zentyal, we had to work really hard on making the restore process rock solid. This required the full involvement of the Zentyal Development Team, since it touches almost the whole code base of Zentyal server: not only writing functional tests, but also testing configurations automatically using our homemade tool, ANSTE. We found rough edges in our data back-end, got into the guts of Perl IPC, and discovered how beautiful UTF-8 really is, but how poorly integrated it is in some applications.

It has been a very tough time, but we think this feature deserves it.

PS: You can try the configuration backup if your server has a free Basic Subscription. The full Disaster Recovery service is only available for servers with Professional or Enterprise Server Subscriptions.

A post by Enrique Hernandez

Zen Chat: Top 5 tips for using git


As some of you might know, in November we held our first session of Zen Chats, a meeting in which members of the Zentyal Team present something they are interested in to their fellow co-workers. For the first session we had three talks: Nacho spoke about the company from a strategic point of view, Quique explained the architecture of Zentyal Cloud in detail, and finally I tried to get my mates to love git a little more.

My talk was titled Top 5 tips for using git and was organized in five blocks of more or less well-known useful information about this version control system, which some of the Zentyal developers use frequently. Below you can find a summary of the presentation, which I wanted to share with you in case you find it useful!

1. Some things about git

Git is a distributed version control system. That means every node involved in a hosted project can act both as a client and as a server. It implies that in git “to clone” really means “to clone”: when you clone a source repository you copy everything from its location, including all the history, so after cloning, a local repository contains all the information needed to act as a server. This distributed model also allows different branches to have different origins, and other git projects to be included as subprojects. Going deeper, a git repository is composed of all the raw data with its meta-information, plus a representation of some state of the files. This is the total opposite of a centralized repository, where the data and meta-information live on the server side, and the representation of the files (and maybe some meta-information) lives on the client side. A repository that is only going to act as a server doesn’t need to check out the files; such repositories are called “bare” repositories.

Git is built on a very simple object model in which only four classes are defined: commit, tree, blob and tag. This model keeps the stored information simple and enables better optimizations.

Another important point about git is that when committing a version of the files, it stores complete snapshots of the staged files, not just their deltas.
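You can poke at this object model directly from any repository; a few read-only commands illustrate it:

git cat-file -t HEAD     # prints "commit"
git cat-file -p HEAD     # shows the tree, parent, author and message
git ls-tree HEAD         # lists the blobs and subtrees of the root tree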

2. Beware of the configuration

Git includes a configuration subsystem that controls all kinds of variables, from pure user settings such as name and e-mail to more specific information such as repository origins. This configuration subsystem is basically used with the following commands:

  • To list all the values, git config [--global] -l
  • To read a variable, git config [--global] section.variable
  • To set the value of a variable, git config [--global] section.variable value
  • To remove a variable, git config [--global] --unset section.variable

Notice that the configuration can be global to all the repositories of the user or local to one of them; the global configuration is used when a variable is not set in the local configuration, or when the --global flag is given.

Something important is to set who is making the changes. This can be done by setting the variables user.name and user.email, or by using environment variables. The second option also lets you specify an author different from the committer. The environment variables for this are GIT_AUTHOR_NAME, GIT_AUTHOR_EMAIL, GIT_COMMITTER_NAME and GIT_COMMITTER_EMAIL.
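For example, to set your identity globally (name and address are, of course, placeholders):

git config --global user.name "Jane Doe"
git config --global user.email "jane@example.com"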

A useful feature implemented on top of this configuration system is aliases, which allow defining custom git commands based on built-in commands or on system commands. E.g., to define an alias that serves the current repository with git serve, we can do this:

git config [--global] alias.serve '!git daemon --reuseaddr --verbose --base-path=. --export-all ./.git'

And don’t forget the extremely useful .gitignore. In this file you can write a list of patterns, and git will ignore every file that matches any of them.
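A typical .gitignore might look like this (the patterns are just common examples):

*.o
*.pyc
*~
build/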

3. History repeats itself (or not…)

One of the things that probably catches your attention when you start working with git is the “ugly” commit identifier. If you come from more traditional version control systems like CVS or Subversion, you’ll be used to simple, sequential numbers that identify each commit in the order it was committed to the central server. As you can imagine, this mechanism is not valid in a distributed system, because commits are not necessarily committed to the same repository and cannot be chronologically ordered. So the chosen solution was to store the commits in a Directed Acyclic Graph (DAG) and identify them by a unique checksum. This mechanism is not only used for commits: it’s used for every kind of object. It’s also useful for avoiding repeated objects; e.g. if you have two identical files (represented as blobs) in the repository, they will have the same identifier and won’t be saved twice.

But this identifier system is not so convenient when humans have to interact with it, and for this reason some helper features were added to git:

  • Abbreviations: When you have to refer to a commit, you only need to write some of its first characters – git will find the nearest one starting with these characters.
  • Tags: As with other version control systems, with git you can tag a specific commit with a more human identifier, a branch could also be considered a specific way of tagging, but also forking the history.
  • Symbols: There are some symbols and operators that can be used to refer to specific commits or ranges of commits. E.g. HEAD always refers to the last commit in the current branch, HEAD^ to the previous one, HEAD^^ to the one before that, and for older commits you can use the tilde with a number, as in HEAD~5 for the fifth commit before HEAD.
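A few of these shortcuts in action (the abbreviated commit identifier is hypothetical):

git show 3f2a1b             # an abbreviated commit identifier
git log HEAD~5..HEAD        # the last five commits
git diff HEAD^ HEAD         # changes introduced by the last commit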

And there are some commands to help the user with these identifiers: with git log you can see a reverse-chronological list of commits with their identifiers, and git show can be used to print the contents of a specific commit, or of a file in that commit. There are also commands to inspect the repository graphically, such as gitk, a complete desktop application for navigating the commits, or git instaweb, which shows this information through a web server run by git itself.

You can also navigate the repository history with blame, reflog, fsck or ls-tree.

Or change the history of the repository with these commands:

  • git commit --amend, a very useful one, allows you to correct mistakes in the last commit.
  • git reset changes the HEAD to a specific commit and optionally resets the index and the local copy to match this change.
  • git rebase is a powerful command that allows rewriting the history, i.e. merging several commits into a single one, editing or completely removing them, or changing their order.
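For instance, a minimal sketch of squashing the last three commits into one with interactive rebase:

git rebase -i HEAD~3
# in the editor that opens, change "pick" to "squash" on the
# second and third lines, then save and edit the combined message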

In any case, don’t change history you have already shared, as you could create serious conflicts for the people who have already downloaded it. Remember that with great power comes great responsibility: although you can force git to do almost anything, think about what you’re doing when playing with history if you are working collaboratively.

4. Let’s Get to Work!

Until now I haven’t talked about the real everyday work everyone does with any version control system. In git, as with other tools, the most used commands are the ones for committing, for retrieving the commits made by other people, and for checking the status of the current work.

First of all, the most commonly used commands to check the current status of your work are git diff and git status. I mention them first because you should always use them before doing anything else. git diff can be used to see differences between commits, but it’s usually run without parameters to see the differences between the working copy and HEAD; this is very useful for knowing what you are about to commit. You can also see the current status of your working copy with git status, which shows the list of files with a special status. The most common statuses are modified, staged and untracked. A modified file, as the name says, is a file you have modified; before committing your changes you have to stage the files you modified, so a staged file is a file marked to be committed; and an untracked file is a file that has never been committed or staged. An important point about git status is that, like other git commands, its output provides very useful information about the state of the repository.

To manually stage a file, you can use the command git add, but files can also be staged automatically when committing. The command git commit is used to finally commit the changes to the local repository: without parameters it commits all the staged files, and with the -a flag it automatically stages all the modified files before committing. You can also specify the files to stage and commit by adding them as parameters. Directories passed as parameters to add or commit are staged recursively.
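A typical edit-review-commit cycle then looks like this (the file name is hypothetical):

git status                   # see which files changed
git diff                     # review the changes against HEAD
git add src/parser.c         # stage the file
git commit -m "Fix overflow in parser"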

Finally, you need commands to share your changes with other repositories or to retrieve the changes shared by other developers. As mentioned before, you need to clone a repository to work with it, and this can be done with the command git clone. Once the repository is cloned, you can use git pull to update your local copy with the changes committed to the remote origin. This command really does two things: first it downloads all the commits that you don’t have (with git fetch) and then it merges everything (with git merge). If during the merge git detects that you have modified something that was also modified in the remote repository, it will mark that part as a conflict and you’ll need to resolve it by hand.

If you want to share your changes, you’ll need git push, which sends all the locally created commits to a remote repository. It’s important to have the repository updated with git pull and correctly merged before pushing, as the remote repository doesn’t have any merging logic: it will only add the new objects to its database.
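Putting the sharing commands together (the repository URL is hypothetical):

git clone git://git.example.com/project.git
cd project
# ...edit and commit locally...
git pull     # fetch the remote commits and merge them
git push     # publish your local commits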

A special mention goes to git stash, a command that pushes uncommitted changes onto a temporary stack, so that you can do other tasks, such as merges, that you cannot do with a dirty workspace. When you finish those tasks, you can run git stash pop to re-apply the stashed changes to your working copy.
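A common stash pattern:

git stash        # put the uncommitted changes aside
git pull         # the merge now runs on a clean tree
git stash pop    # re-apply the stashed changes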

5. Branches

I’m not going to describe git’s branching subsystem in depth; I’m simply going to list some of its great advantages (a short example follows the list):

  • It’s almost free: creating a local branch is nearly instantaneous, as it only creates a couple of new objects; no file copying, no remote connections.
  • It makes it easy to group commits safely.
  • It lets you keep a modification that has grown unexpectedly in a separate line of development.
  • It allows managing monstrous merges quickly and safely.
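For instance (the branch name is a placeholder):

git branch feature-x        # create a branch (nearly instantaneous)
git checkout feature-x      # switch to it
# ...commit freely...
git checkout master
git merge feature-x         # bring the work back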

A whole new world to explore, do it!

Bonus track

  • Workflow cheatsheet: pull, add, diff, status, add, commit, [pull], push
  • Tip: Don’t touch the index without committing, staging or stashing first (nothing red in the status).
  • Tip: Read everything git tells you to read, especially with status and rebase, if you want to avoid disasters.

A post by Jaime Soriano

