Thursday, September 27, 2007

Volume Shadow Copy Service and Windows Vista

(This is an old post from my old Live!Space blog; I'm just collecting it here :-D)

After playing with Vista for a while, I discovered something: in Vista, Microsoft has turned on VSS (Volume Shadow Copy Service) by default. If you right-click a file or a drive in Vista and select Properties, you will see a tab called "Previous Versions". Yes, that is the service VSS provides.
If you are not familiar with VSS, you can read the short technical article "What is Volume Shadow Service". In brief technical terms, VSS provides file-level protection implemented at the block-device level, while keeping file integrity intact. (Still too hard to follow? Heh.) VSS takes periodic snapshots of your disk volume to preserve copies of the files on disk at that moment. Moreover, unlike ordinary backup software, while VSS performs a snapshot it uses a coordination mechanism to let software that reads and writes the volume (for example, a SQL Server database installed on it) take protective measures of its own, so that no file gets modified during the snapshot and no file is caught in the middle of a write (such a file would be incomplete and not worth backing up anyway). VSS also has these programs record their own snapshot-related information into the snapshot. That way, when we roll a file back to a previous version, the software that works with that file will not be shocked to find the file has suddenly grown young again.
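For the curious, here is a rough sketch of what a backup tool (a VSS "requester") does through the published API to obtain such a snapshot. This is my own illustration, not code from any article; all error handling is omitted, every call should really have its HRESULT checked, it must run with administrator rights, and you link with VssApi.lib plus ole32.lib/uuid.lib:

#include <windows.h>
#include <vss.h>
#include <vswriter.h>
#include <vsbackup.h>

int main()
{
    CoInitialize(NULL);

    IVssBackupComponents* backup = NULL;
    CreateVssBackupComponents(&backup);            // obtain the requester interface
    backup->InitializeForBackup(NULL);
    backup->SetBackupState(false, false, VSS_BT_FULL, false);

    IVssAsync* async = NULL;
    backup->GatherWriterMetadata(&async);          // writers (e.g. SQL Server) describe themselves
    async->Wait(); async->Release();

    VSS_ID setId, snapId;
    backup->StartSnapshotSet(&setId);
    backup->AddToSnapshotSet((VSS_PWSZ)L"C:\\", GUID_NULL, &snapId);

    backup->PrepareForBackup(&async);              // writers flush and freeze their files here
    async->Wait(); async->Release();

    backup->DoSnapshotSet(&async);                 // the copy-on-write snapshot is taken
    async->Wait(); async->Release();

    backup->Release();
    CoUninitialize();
    return 0;
}

The PrepareForBackup/DoSnapshotSet pair is exactly the coordination described above: writers get a chance to quiesce before the snapshot is committed.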
VSS first appeared in Windows Server 2003, but there it is not enabled by default. So far I have not found out how to schedule VSS snapshots in Vista. In Windows Server 2003 you can right-click a volume in Disk Manager and set the VSS schedule on the Shadow Copies page of its properties. However, in Windows Server 2003 VSS does not support snapshot-on-change: it can only snapshot a volume at pre-configured points in time, not automatically whenever the volume's contents change. I experimented with the snapshot timing in Vista and found the intervals are not uniform; it looks more like snapshot-on-change. If that is the case, the practical value of VSS has improved fundamentally. VSS has used copy-on-write from the beginning, so implementing this should not be too difficult. In fact, the VSS documentation points out that snapshot-on-change was originally not adopted for performance reasons. I will present and discuss the details after I study the VSS documentation for Vista.
One last point: I am not yet sure which editions of Vista will include VSS. I hope it shows up at least in Home Premium and above; VSS should hold considerable appeal for consumer users as well.

Update on 09/27/2007: Another thing I have not figured out is how to enable or disable VSS per volume; in fact, I have not found a way to turn VSS on or off at all. Of course, shutting the VSS service down directly through the service manager works, but that disables VSS for all volumes at once.

C++ static member variables

(This is an old post from my old Live!Space blog; I'm just collecting it here :-D)


An old hand tripped over a new problem, heh. Today (well, actually more than a year ago) I stumbled on this topic.
See a class definition below:
class A
{
public:
    static A* GetInstance();
    static void ReleaseInstance();
private:
    A();
    virtual ~A();
    static A* m_cInstance;
};
This is a typical definition of the singleton pattern. Is there any problem with this piece of code? When you compile it, nothing goes wrong. But when you actually use the class, you get this:
error LNK2001: unresolved external symbol "private: static class A* A::m_cInstance" (?m_cInstance@A@xxxxxxx@xxx)
What's wrong here? Honestly, I hadn't been using C++ for a while, and I almost never use the singleton pattern in C++. I can say I'm an expert in C or C#, but not really an expert in C++.
Well, to my surprise, lots of people have hit the same problem, so the answer was found in minutes:
In C++, when you declare a static member variable in a class, you are not actually defining the variable; you are only declaring it as part of the class type. What you need to do is ACTUALLY DEFINE THE VARIABLE AGAIN OUTSIDE THE CLASS DEFINITION.
So, to make it work, add the following line at the beginning of A.cpp:
A* A::m_cInstance;
We know a static member variable is actually not part of any class instance: it is a global variable, shared among all instances of that class type, and it can also be accessed directly through the class name. So in C++ the language asks you to explicitly define it as a global variable.
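For completeness, here is a minimal sketch of what the rest of A.cpp might look like; the lazy-initialization bodies of GetInstance() and ReleaseInstance(), and the header name A.h, are my assumption of the usual pattern, not code from the original project:

#include "A.h"

// The out-of-class definition that actually reserves storage for the static member.
A* A::m_cInstance = NULL;

A* A::GetInstance()
{
    // Create the single instance on first use.
    if (m_cInstance == NULL)
        m_cInstance = new A();
    return m_cInstance;
}

void A::ReleaseInstance()
{
    delete m_cInstance;   // deleting NULL is harmless
    m_cInstance = NULL;
}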
What a stupid design! I can't believe the C++ committee made such a stupid decision. There is absolutely no difficulty in letting the compiler handle that extra line without any attention from the programmer. Why don't they do this?
Again, I have to say, C++ is really a garbage language. It is too old; it is outdated. Modern languages like C# and Java are far better than C++ in every aspect except performance. Well, in the modern software industry, development productivity and program reliability are far more important than running speed in most cases. And when it comes to performance, why not create a language that looks like C# but compiles to native code with no garbage collection? Is there a technical reason that C++ should survive? (Yes, I know: keeping the value of existing investments and .... job security :P)

Friday, September 21, 2007

stricmp() in Linux

OK, this is a short one. I'd say I'm still pretty new as a real full-time Linux developer, so this is the first time I've met this problem.

I've been using stricmp() for a long, long time, but this is the first time I've found that there is no stricmp() on Linux. Instead, Linux has an equivalent function called strcasecmp() (and a corresponding strncasecmp()). On top of that, these two functions are not declared in string.h; they live in a separate header called strings.h. That's it.

This is really annoying, especially when you care about portability.
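The usual workaround is a tiny compatibility shim, something like this (the header name portable_string.h is just my placeholder):

/* portable_string.h */
#ifdef _WIN32
#include <string.h>
/* Map the POSIX names onto the Windows CRT equivalents. */
#define strcasecmp  _stricmp
#define strncasecmp _strnicmp
#else
#include <strings.h>   /* strcasecmp() and strncasecmp() live here on Linux */
#endif

Include this everywhere and standardize on strcasecmp()/strncasecmp(); the POSIX names are the more portable direction to settle on.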

Friday, September 14, 2007

Bad multi-core scheduling in Windows

This is really bad, at least on Windows Vista. Now that my machine is set up and getting more and more stable, I've started to play with the four cores. Oh yeah! I spent $300; shouldn't I be happy to see some exciting performance? Hmm.....

The story always sounds beautiful... Look at this experiment I did last night:

Keep your task manager open and switch to the performance page.

First, I ran TAT (Intel's Thermal Analysis Tool) and started its load on core 0 and core 1. I then saw cores 0 and 1 at 100% load while cores 2 and 3 stayed close to idle, at around 5% usage at most. TAT starts two processes named MeromMaxPowerVerOp3.exe. I checked the affinity of these two processes: one is bound to core 0 only and the other to core 1 only. That's expected; that's good.

Then I started Orthos, which is supposed to start two heavy-duty threads. But what I found in the task manager is that this didn't put cores 2 and 3 at 100% load as expected; in fact, it didn't give cores 2 and 3 any load at all. At this point I assumed Orthos had its affinity set to cores 0 and 1 as well. But to my surprise, Orthos.exe had affinity with all four cores, yet it seemed to be using only cores 0 and 1 (judging from its interface and purpose). That's weird. I need to investigate how it can do that without setting affinity.

For the next step, I did the following: I set the affinity of the two TAT processes to cores 2 and 3 respectively, and set the affinity of Orthos to cores 0 and 1. Oh yeah! This was the first time I saw my CPU at 100% full load!

Then I planned my next experiment. I set the affinity of Orthos.exe to core 0 only and left core 1 with no heavy-duty tasks. Then I started Firefox. It was kind of OK, still responsive but not prompt, and it lagged a lot when refreshing a long page. What happened? When you start a program, if it doesn't specify otherwise, Windows gives it default affinity with all cores. And the scheduler is not that smart: it won't steer work away from the heavily loaded cores, so your program still hits those cores a bit.

The last step was to set the affinity of Firefox to core 1. Now it runs almost as if it had the whole computer to itself, except when it needs to access the disk (and that part is actually true: my 2GB of memory is not sufficient, so it needs to swap anyway).

This is really not good. So I'm planning to write a tool that does this kind of manual "scheduling" by giving the user an easy interface for defining rules; application affinities are then assigned dynamically by the tool to achieve better CPU usage, better interactivity, and so on. Any information or ideas about how to do this are welcome. And if you want to join me, please contact me; I'd be happy to have some companions to work with ;-)
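As a crude starting point, here is a minimal sketch of the core operation such a tool needs; it is just my illustration, not the planned program. It pins an already-running process, identified by PID, to the set of cores given as a hex bitmask (e.g. 0x2 means core 1 only):

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[])
{
    if (argc != 3)
    {
        printf("usage: pin <pid> <affinity-mask-in-hex>\n");
        return 1;
    }

    DWORD pid = (DWORD)atoi(argv[1]);
    DWORD_PTR mask = (DWORD_PTR)strtoul(argv[2], NULL, 16);

    // We need SET_INFORMATION rights to change another process's affinity.
    HANDLE process = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION,
                                 FALSE, pid);
    if (process == NULL)
    {
        printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    // Restrict the process (all its threads) to the cores set in the mask.
    if (!SetProcessAffinityMask(process, mask))
        printf("SetProcessAffinityMask failed: %lu\n", GetLastError());

    CloseHandle(process);
    return 0;
}

A real rule-based tool would enumerate processes periodically (e.g. with the ToolHelp snapshot APIs) and reapply masks as the load changes.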

Wednesday, September 12, 2007

Install and configure CVS server on Ubuntu

We have set up two Ubuntu machines at home, and there will probably be another one in a virtual machine. So last night I decided to configure a CVS server for our hobby development. Compared with CodeRight or Perforce, CVS is just a toy. But it is a good toy: free and sufficient for a small development environment.

To install CVS on Ubuntu, the first thing is to install CVS (I said nothing!):

sudo apt-get install cvs

This will install the cvs package and all its dependencies on your machine. You should do this step on every machine that will be either the CVS server or a client.


Next, on the machine that will be the CVS server, create a directory to serve as the root directory of your CVS repository. Its name and location can be arbitrary:

sudo mkdir /usr/local/cvsroot

Now, the next step is to set the access rights on the repository directory. First, create a group called cvs_user and add the users that need to access CVS to this group; you can do this via the System > Administration > Users and Groups menu. Then give the group ownership of the repository:

sudo chown -R :cvs_user /usr/local/cvsroot

This lets members of the cvs_user group (not everyone) access it.
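If you prefer the command line over the GUI, something along these lines should do the same job (alice is a hypothetical example user; run the adduser line once per user):

sudo addgroup cvs_user
sudo adduser alice cvs_user
sudo chmod -R g+rwX /usr/local/cvsroot

The chmod makes sure group members can actually write to the repository tree.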

Next, you need to initialize your CVS repository. Since you are on the server itself, you can point cvs at the local path directly:

sudo cvs -d /usr/local/cvsroot init

Note that with this setup CVS is accessed over SSH (the :ext: method, which is what the host:/path form implies), so it authenticates against ordinary system accounts. Anyone who should use CVS just needs a regular user account on the server and membership in the cvs_user group; there is no separate CVS username or password to create.

You may not want to type the long repository location "localhost:/usr/local/cvsroot" every time; you can avoid this by defining the CVSROOT environment variable:

Edit your shell configuration file:

vim ~/.bashrc

and add the following line:

export CVSROOT=localhost:/usr/local/cvsroot

on the CVS server, or

export CVSROOT=[server name or IP]:/usr/local/cvsroot

on all the other machines.
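One caveat: depending on how your cvs package was built, the default remote shell for the :ext: method may be rsh rather than ssh. Since this guide relies on SSH, it is safest (though I have not verified the Ubuntu default) to also add:

export CVS_RSH=ssh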

OK, now you may want to check whether the CVS server works. On any of your machines, create a test directory, e.g. ~/test, and put some files in it. Then set the current working directory to ~/test and try the following command:

cvs import test test_vender test_start

This imports your local ~/test directory into the repository as module test, creating a vendor branch with vendor tag test_vender and release tag test_start. You will first be asked for your password, which at this point is simply your system password on the server. Then you will be dropped into a text editor: CVS requires you to write something describing your submission, so write something like "Hey! This is my first CVS check-in!", then save and exit. If CVS reports that it imported your local directory successfully, you're in business! Now go back to your home directory and delete the local copy:

rm -rf ~/test

and then check it out again from the CVS repository:

cvs co -r test_start test

Do you see your ~/test folder come back? If so, congratulations, you have succeeded!

You probably don't want to type your password every time you talk to the CVS server, so you can take advantage of SSH's public-key mechanism.

Run the following command:

sudo apt-get install openssh-client

ssh-keygen is part of the openssh-client package, so if it is not already on your machine, this will install it. Next, run:

ssh-keygen

Press ENTER all the way through and let it finish; it will create two files in your ~/.ssh/ folder: id_rsa (your private key) and id_rsa.pub (your public key). Then:

cd ~/.ssh
scp id_rsa.pub [username]@[server name or IP]:~/

This copies your public key to the CVS server. Then log into the CVS server and do the following:

cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
rm ~/id_rsa.pub

This appends your public key to the server's list of authorized keys, so the next time your CVS client connects, the server will recognize it and let it in without asking for a password. See the ssh and ssh-keygen man pages for more details.

Now go back to the client machine and try a CVS command again. See? You are not asked for any password. Repeat the above steps for each client and you are all set.

Please let me know if anything is unclear or wrong, and I will polish these instructions further.

Tuesday, September 11, 2007

Cheap Optical SPDIF support for Creative Audigy Sound Cards

Recently I seem to keep making mistakes. Many mistakes, one after another. One of the mistakes: when I started to assemble my new desktop, I chose the Asus P5K. I'm not saying Asus is not good or the P5K is not good; the mistake is that I should have ordered the P5KC, which is only $15 more.

There are two major differences between the P5K and the P5KC. First, the P5KC supports DDR3: it has two DDR3 slots in addition to the standard four DDR2 DIMM slots. Second, the P5KC has an optical SPDIF output for the on-board sound, while the P5K only has a coaxial SPDIF output.

My new receiver, the Samsung HT-X40 home theatre, has only an optical digital input, no coaxial one. It seemed I could not connect my computer to the receiver digitally.

Everything has good and bad sides, ..., even a mistake. I have an old Creative Audigy 2 sound card, which, compared with modern onboard sound chips, still has far better sound quality, more functionality and lower CPU consumption. The standard Audigy 2 does not have an onboard optical connector either, but I was pretty sure Creative had a digital extension card available. To my surprise, I couldn't find that extension card on the North American market! Not to mention how expensive this kind of OEM accessory can be.

So, I'm hunting around for an alternative solution and here comes what I found:

This is a third-party digital extension card designed to work with a wide line of Creative products: Audigy, Audigy 2, X-Fi, etc. It has optical input, optical output, coaxial input and coaxial output! In a word, this is an all-in-one solution. It connects directly to the Audigy 2 card with a common 40-pin IDE cable. The card can be found here, and it costs only 70 RMB, which is about 9 dollars (including shipping). The quality of this card is excellent, and the price is unbeatable! Who says China can only make cheap products? Who says China cannot make creative products? Chinese factories are more creative than everyone thinks!

Well, unfortunately it is currently only available in China; fortunately, I happened to find someone traveling from China, so I got the card in three days at a total cost of only $9!

If you need it and have a friend in China, you can ask him/her to mail it to you. It is very light, so the postage won't be high. I wonder how many people might need this little piece; if there is a market, I could probably get a batch from China and list them on eBay. So let me know if you are interested but have no way to get one. (BTW, the link I gave is in Chinese; it's a good time for you to find a Chinese friend or try online translation :-P)

Monday, September 10, 2007

Do not use Forceware 16x family drivers if you want to connect HDTV on Vista

I ran into serious trouble trying to connect my new quad-core machine to my new 1080p HDTV.

I recently bought a Samsung 40-inch 1080p Full HD TV and also built a new desktop with an Intel Quad Core Q6600 CPU and an nVidia GeForce 8600GTS. I also have an Acer 19-inch LCD connected as a second display.

Dual-head output, 1920x1080 to the HDTV and 1280x1024 to the LCD monitor simultaneously, works fine on Windows XP. But after I installed Vista Business, I couldn't get it to work correctly.

I downloaded and installed the WHQL ForceWare driver version 162.22 from nVidia's website. To my surprise, when I enable clone mode in the nVidia control panel, the maximum resolution for both outputs becomes 1280x1024, which is somewhat acceptable though not reasonable, because I remember I used to be able to set different resolutions even in clone mode.
When I set the displays to desktop-extend mode, it lets me set the HDTV to 1920x1080. But here comes the problem: the picture suffers from serious overscan. The actual display area on the TV screen is much smaller than the screen size, and you can't get the TV to compensate for it.

There is one way to compensate on the driver side, but I don't think it is a good idea: compensation means down-sampling and thus blurring. I did some investigating online and found others suffering from the same problem. Someone pointed out that the 16x-series drivers are the killer. So I switched back to a 158.xx driver, and now it works as it should!

PS: It seems both nVidia and AMD (ATI) still have a long way to go before they offer a solid solution for Vista. Why don't you guys work harder and faster? Vista is a good way to attract non-gamers to buy expensive new video cards, but if you want to earn the money, you have to provide a good product. Hey, Haitao! I'm talking to you :-D

Resume your work after closing putty

I had this headache for a while:

I putty into my development machine every day. Pretty often I need to run time-consuming tasks, e.g. making a complete build of the whole project, which takes a really long time (for my current project, about one to two hours).

So when I need to rush to a meeting, or go home while a task is still running, I cannot simply close the lid and put the laptop into standby, because that drops the network connection and disconnects putty from the remote host. The session created by putty is then terminated, and oops.... my task gets killed with it.

Then my wife found me the solution: screen

screen is a Linux tool that lets you run multiple virtual terminals within a single terminal. To keep my task running even after I close putty, I only need a few simple screen commands:

1. Before starting the time-consuming job, or better yet right after you log into your remote host, run >screen

2. Run your jobs or do whatever you want to do.

3. When you want to leave, simply close putty; your session will keep running (screen detaches it automatically).

4. When you want to start working again, open another putty and type:
> screen -r

If there's only one detached screen session, it will be brought back automatically. Otherwise, this gives you a list of the virtual terminals screen maintains. Type >screen -r [pid.tty.host] to get back to the session you were working in.
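Put together, a typical round trip looks like this (the session name "build" and the build command are just examples; -S names the session so you can resume it by name):

> screen -S build
> make 2>&1 | tee build.log
(close putty and go to your meeting; the build keeps running)
> screen -r build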

That's it!

When screen is controlling your session, you cannot scroll back with the terminal's own buffer, because screen is recording the output itself. To scroll back, type Ctrl-a followed by ESC, then use the up and down arrow keys to browse. You will notice that screen actually provides a much bigger buffer! Yeah~~