Not really a big deal, but I decided to make a quick jump back to WordPress from Ghost. I don’t have a big reason other than that I tried standing up a brand new site for a friend and hit quite a few more issues than I’d like. I don’t want to have to babysit a website that I forget about for two years at a time.
Woke up this morning with zero agenda. No pressing tasks, no urgent errands—just a blank slate and the sweet freedom to figure things out as I went. And what did I figure out? Today was the day to replace the NVMe in my trusty Lenovo X1 Carbon (2018 model). This laptop’s been a total workhorse for me, still rocking an i7 and 16GB of RAM, so it felt like the perfect candidate for a little weekend refresh.
Now, for the big decision: the operating system. I’ve been really digging my recent server-side work with Alpine Linux, and its minimalist vibe is fantastic for what it is. But as a daily driver? Eh, not quite. It feels like almost everything requires a bit of a workaround compared to a more traditional Linux setup. Don’t get me wrong, Alpine’s got a unique set of goals, and I still love it for its niche, but for my everyday grind, I decided to return to an old friend: Ubuntu.
Normally, even with Ubuntu (which I’ve got plenty of server-side experience with), I stick to the LTS (Long Term Support) releases. But today felt different. I wanted to see what the bleeding edge had to offer, so I threw caution to the wind and went with Ubuntu 25.04. Go big or go home, right?
As expected, the installation was pretty much a breeze. A few quick questions, and the installer hummed along, doing its thing. Once that was done, I dove straight into getting my daily tools set up. It’s funny how that list keeps shrinking, mostly because so much of what I do is browser-based these days. But here’s the essential lineup:
Google Chrome (Had to snag the .deb file directly from Google’s site for this one—a minor hiccup!)
Joplin
Owncloud Client Sync
Draw.io
PowerShell
Signal
Pretty straightforward, huh?
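The Chrome hiccup, for the record, is a two-liner; this sketch assumes Google’s long-standing direct .deb URL (verify it on their download page if it 404s):

wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install ./google-chrome-stable_current_amd64.deb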
So far, the graphics are behaving perfectly, and YouTube streams without a hitch. I’m no gamer, but I did a quick test just to see how it handled. And speaking of tests, Wine is a new “thing” for me; I’ve never actually needed it before. But seriously, the installer just did its magic without any input from me. Big win!
Alright, now for the real test: actually using it day-to-day. I’ll report back with how it holds up! Catch ya later!
In the world of IT, we’re constantly striving to enhance an organization’s security posture, resolve email deliverability woes, or simply bring their infrastructure up to par. A common task in this realm is the need to swiftly assess DNS configurations, especially during events like an acquisition where dozens—or even hundreds—of domains need to be scrutinized for their current settings.
While tools for this purpose undoubtedly exist, the unique challenges of each audit often necessitate a more tailored approach. Recently, during an acquisition involving over 50 domains, I found myself needing a more efficient way to gather critical DNS record information. This led to the development of two PowerShell scripts, designed to automate and simplify this often-tedious process. Full disclosure: These scripts were developed with significant assistance from Gemini, an AI.
Script 1: Comprehensive DNS Record Retrieval
This primary script is designed to handle the bulk of your DNS record discovery. It comes pre-loaded with a comprehensive list of common DKIM selectors, which should cover a wide range of scenarios. Should you encounter a less common selector, the script is easily modifiable to incorporate new findings.
Detailed documentation and additional usage instructions are embedded directly within the script.
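For a feel of what it does, here’s a stripped-down sketch of the core loop (domains.txt and this short selector list are placeholders; the real script ships a much longer list plus output handling):

# Sketch of the idea behind Get-DNSRecords.ps1 (simplified)
$domains = Get-Content .\domains.txt
$selectors = @('selector1', 'selector2', 'google', 'default')   # trimmed-down DKIM selector list

foreach ($domain in $domains) {
    # MX, SPF (TXT), and DMARC lookups
    Resolve-DnsName -Name $domain -Type MX -ErrorAction SilentlyContinue
    Resolve-DnsName -Name $domain -Type TXT -ErrorAction SilentlyContinue |
        Where-Object { $_.Strings -match 'v=spf1' }
    Resolve-DnsName -Name "_dmarc.$domain" -Type TXT -ErrorAction SilentlyContinue

    # Probe each known DKIM selector
    foreach ($selector in $selectors) {
        Resolve-DnsName -Name "$selector._domainkey.$domain" -Type TXT -ErrorAction SilentlyContinue
    }
}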
Script 2: Targeted DKIM Selector Discovery
Occasionally, you’ll encounter domains using unconventional or obscure DKIM selector names. This secondary script is specifically designed to help identify these “needle in a haystack” selectors that might be missed by a more general scan.
Similar to the first script, comprehensive documentation is included within the script itself.
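Conceptually it’s a wordlist brute-force, something like this sketch (the parameter names and wordlist path are illustrative, not necessarily the script’s actual ones):

# Conceptual sketch of Get-DNSSelector.ps1: try every candidate selector from a wordlist
param(
    [Parameter(Mandatory)][string]$Domain,
    [string]$Wordlist = '.\selectors.txt'
)

foreach ($candidate in Get-Content $Wordlist) {
    $record = Resolve-DnsName -Name "$candidate._domainkey.$Domain" -Type TXT -ErrorAction SilentlyContinue
    if ($record) { "Found DKIM selector '$candidate' for $Domain" }
}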
A Practical Workflow
My current workflow involves running Get-DNSRecords.ps1 first. If a DKIM selector isn’t found, I then use Get-DNSSelector.ps1 to identify the elusive selector. Once found, I integrate that new selector into the Get-DNSRecords.ps1 script for future, more comprehensive scans. This iterative process has proven effective across hundreds of domain checks.
Technical Note: These scripts have been tested on PowerShell 7.5 running on both Windows 10 and Windows 11 with consistent results.
Future Enhancements
Looking ahead, I plan to explore transforming these scripts into a web application, making DNS record auditing even more accessible and user-friendly.
I welcome any feedback or suggestions on these scripts and the workflow. Your insights help refine and improve these tools for the benefit of the community.
I’ve been on a homelab kick as of late and get annoyed by the constant barrage of browser warnings telling me the site I’m going to isn’t safe. There are a few ways to skin this potato, but I’m going to go with a self-signed wildcard certificate signed by a self-signed certificate authority. I’ll apply the cert to the servers/services or to my Nginx Proxy Manager to handle the certificate side of things, and add the CA to the Trusted Root Certification Authorities store on my computer(s).
I’m running all of these commands on a stock Alpine Linux VM with curl and bash installed; however, you’ll be able to do this on most Linux distros. Windows 10 information is below as well.
NOTE: This creates a certificate for homelab.local and *.homelab.local. Feel free to change to a domain that reflects your own setup/needs.
Create the CA Certificate
First, the key to sign the CA with:
openssl genrsa -des3 -out homelabCA.key 4096
When issuing this command, you’ll need to enter a pass phrase. I used Bitwarden to generate a 32 character one, but you can do as you wish.
Create a CA certificate with the newly created CA key.
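A typical form looks like this (the 10-year validity is my choice; adjust -days to taste):

openssl req -x509 -new -nodes -key homelabCA.key -sha256 -days 3650 -out homelabCA.crt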
Make the SelfSignedCA Trusted by your browsers/computers
Import homelabCA.crt into your Trusted Root Certification Authorities store (the CA key itself stays private and never needs to be imported), and use wildcard.crt and wildcard.key for your servers/services and/or proxy.
As long as you have your DNS up to snuff, you should be able to navigate to your apps with https://appname.homelab.local and not be annoyed with yet another warning.
tl;dr, here’s a lazy script to do this for you, you just need to enter your pass phrase and answer your normal certificate questions then move the certificates/keys to their respective places (proxy, app/service, Trusted Root CA, etc.)
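Here’s a minimal sketch of that script (domain hardcoded to homelab.local; openssl itself still prompts for the pass phrase and the usual certificate questions):

#!/bin/sh
# Lazy homelab CA + wildcard generator (sketch; change DOMAIN to suit)
DOMAIN="homelab.local"

# CA key (prompts for a pass phrase) and CA certificate
openssl genrsa -des3 -out homelabCA.key 4096
openssl req -x509 -new -nodes -key homelabCA.key -sha256 -days 3650 -out homelabCA.crt

# Wildcard key and signing request
openssl genrsa -out wildcard.key 4096
openssl req -new -key wildcard.key -out wildcard.csr

# SAN file so both the bare domain and the wildcard are covered
printf 'subjectAltName = DNS:%s, DNS:*.%s\n' "${DOMAIN}" "${DOMAIN}" > wildcard.ext

# Sign the wildcard cert with the CA
openssl x509 -req -in wildcard.csr -CA homelabCA.crt -CAkey homelabCA.key \
  -CAcreateserial -out wildcard.crt -days 825 -sha256 -extfile wildcard.ext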
I won’t write up the whole thing, as it’s largely the same; however, I did have to install OpenSSL via winget and set the PATH before it worked. Here’s the script (save as a .ps1):
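A sketch of what mine looked like (the winget package ID and install path vary, so check winget search openssl and adjust the PATH line to your install):

# Homelab CA + wildcard on Windows (sketch; assumes OpenSSL installed via winget)
# Find/install the package first, e.g.: winget search openssl
# Point PATH at wherever your OpenSSL landed:
$env:PATH += ";C:\Program Files\OpenSSL-Win64\bin"

$domain = "homelab.local"

openssl genrsa -des3 -out homelabCA.key 4096
openssl req -x509 -new -nodes -key homelabCA.key -sha256 -days 3650 -out homelabCA.crt

openssl genrsa -out wildcard.key 4096
openssl req -new -key wildcard.key -out wildcard.csr

Set-Content -Path wildcard.ext -Value "subjectAltName = DNS:$domain, DNS:*.$domain"

openssl x509 -req -in wildcard.csr -CA homelabCA.crt -CAkey homelabCA.key `
  -CAcreateserial -out wildcard.crt -days 825 -sha256 -extfile wildcard.ext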
I have 100% struggled with this for a while now. It’s been keeping me from deploying apps with HA or scale in mind. Today, I finally figured it out!
Here’s what I’m working with.
Homelab Traffic Diagram Simplified
From the generic image above: I’m using Cloudflare to proxy my DNS and terminate SSL, forwarding traffic to my homelab firewall (OPNsense), which forwards it to an Nginx Proxy Manager VM (holding a wildcard certificate for my domain) running on my Proxmox cluster. Previously I’d just used a single host in NPM for the proxy setup, simply because I couldn’t figure out the proper way to do this. Now I have Docker Swarm set up and am slowly migrating services over to it. First up is 13ft, a simple yet effective paywall bypass app from https://github.com/wasi-master/13ft.
NPM Configuration
Log into your NPM instance and go to Proxy Hosts. Click Add Proxy Host and fill it out like below:
NPM Proxy Host Configuration
Change up your domain name and port number as you see fit, but use “backend” as the Forward Hostname/IP. Click over to the SSL tab and select your SSL Certificate of choice, then finally to the advanced tab and fill out your “Custom Nginx Configuration” to look like what I have below:
NPM Proxy Host Advanced
Click Save and SSH to your NPM instance and do the following:
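The exact path is my assumption based on NPM’s documented custom config includes (upstream blocks need to live at the http level, which is what http_top.conf is for); poke around /data/nginx/custom if yours differs:

mkdir -p /data/nginx/custom
touch /data/nginx/custom/http_top.conf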
Then, you’ll need to modify the newly created file with the following:
upstream backend {
    server 192.168.254.6:5000;
    server 192.168.254.7:5000;
    server 192.168.254.8:5000;
}
Save the file and start testing. I was able to use tcpdump to verify it was load balancing as expected. This specific configuration without additional definition uses “round robin” as the load balancing method.
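If round robin isn’t what you want, nginx supports other balancing methods inside the upstream block; a least-connections variant, for example, would look like this (same IPs as above):

upstream backend {
    least_conn;
    server 192.168.254.6:5000;
    server 192.168.254.7:5000;
    server 192.168.254.8:5000;
}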
If you want to see the traffic with tcpdump, use the following from your SSH session:
tcpdump port 5000
You’ll get output similar to this:
10:28:22.430847 IP npm.homelabdomain.tld.54454 > 192.168.254.8.5000: Flags [.], ack 176, win 501, options [nop,nop,TS val 3914647141 ecr 202375768], length 0
10:28:22.430888 IP npm.homelabdomain.tld.54454 > 192.168.254.8.5000: Flags [F.], seq 577, ack 176, win 501, options [nop,nop,TS val 3914647141 ecr 202375768], length 0
10:28:22.431403 IP 192.168.254.8.5000 > npm.homelabdomain.tld.54454: Flags [F.], seq 176, ack 578, win 498, options [nop,nop,TS val 202375769 ecr 3914647141], length 0
10:28:22.431407 IP npm.homelabdomain.tld.54454 > 192.168.254.8.5000: Flags [.], ack 177, win 501, options [nop,nop,TS val 3914647141 ecr 202375769], length 0
10:33:22.330079 IP npm.homelabdomain.tld.35660 > 192.168.254.7.5000: Flags [S], seq 180177399, win 64240, options [mss 1460,sackOK,TS val 33715079 ecr 0,nop,wscale 7], length 0
10:33:22.331293 IP 192.168.254.7.5000 > npm.homelabdomain.tld.35660: Flags [S.], seq 799044855, ack 180177400, win 64308, options [mss 1410,sackOK,TS val 523646323 ecr 33715079,nop,wscale 7], length 0
10:33:22.331302 IP npm.homelabdomain.tld.35660 > 192.168.254.7.5000: Flags [.], ack 1, win 502, options [nop,nop,TS val 33715080 ecr 523646323], length 0
10:33:22.331347 IP npm.homelabdomain.tld.35660 > 192.168.254.7.5000: Flags [P.], seq 1:578, ack 1, win 502, options [nop,nop,TS val 33715080 ecr 523646323], length 577
10:33:22.332055 IP 192.168.254.7.5000 > npm.homelabdomain.tld.35660: Flags [.], ack 578, win 498, options [nop,nop,TS val 523646324 ecr 33715080], length 0
10:33:22.333427 IP 192.168.254.7.5000 > npm.homelabdomain.tld.35660: Flags [P.], seq 1:176, ack 578, win 498, options [nop,nop,TS val 523646325 ecr 33715080], length 175
10:33:22.333432 IP npm.homelabdomain.tld.35660 > 192.168.254.7.5000: Flags [.], ack 176, win 501, options [nop,nop,TS val 33715082 ecr 523646325], length 0
10:33:22.333529 IP npm.homelabdomain.tld.35660 > 192.168.254.7.5000: Flags [F.], seq 578, ack 176, win 501, options [nop,nop,TS val 33715083 ecr 523646325], length 0
10:33:22.334460 IP 192.168.254.7.5000 > npm.homelabdomain.tld.35660: Flags [F.], seq 176, ack 579, win 498, options [nop,nop,TS val 523646326 ecr 33715083], length 0
10:33:22.334467 IP npm.homelabdomain.tld.35660 > 192.168.254.7.5000: Flags [.], ack 177, win 501, options [nop,nop,TS val 33715083 ecr 523646326], length 0
10:38:22.506149 IP npm.homelabdomain.tld.41766 > 192.168.254.6.5000: Flags [S], seq 3866721961, win 64240, options [mss 1460,sackOK,TS val 992168023 ecr 0,nop,wscale 7], length 0
10:38:22.507256 IP 192.168.254.6.5000 > npm.homelabdomain.tld.41766: Flags [S.], seq 1024843093, ack 3866721962, win 64308, options [mss 1410,sackOK,TS val 2515934452 ecr 992168023,nop,wscale 7], length 0
10:38:22.507265 IP npm.homelabdomain.tld.41766 > 192.168.254.6.5000: Flags [.], ack 1, win 502, options [nop,nop,TS val 992168024 ecr 2515934452], length 0
10:38:22.507311 IP npm.homelabdomain.tld.41766 > 192.168.254.6.5000: Flags [P.], seq 1:578, ack 1, win 502, options [nop,nop,TS val 992168024 ecr 2515934452], length 577
10:38:22.508198 IP 192.168.254.6.5000 > npm.homelabdomain.tld.41766: Flags [.], ack 578, win 498, options [nop,nop,TS val 2515934453 ecr 992168024], length 0
10:38:22.509597 IP 192.168.254.6.5000 > npm.homelabdomain.tld.41766: Flags [P.], seq 1:176, ack 578, win 498, options [nop,nop,TS val 2515934455 ecr 992168024], length 175
10:38:22.509603 IP npm.homelabdomain.tld.41766 > 192.168.254.6.5000: Flags [.], ack 176, win 501, options [nop,nop,TS val 992168027 ecr 2515934455], length 0
10:38:22.509650 IP npm.homelabdomain.tld.41766 > 192.168.254.6.5000: Flags [F.], seq 578, ack 176, win 501, options [nop,nop,TS val 992168027 ecr 2515934455], length 0
10:38:22.510594 IP 192.168.254.6.5000 > npm.homelabdomain.tld.41766: Flags [F.], seq 176, ack 579, win 498, options [nop,nop,TS val 2515934456 ecr 992168027], length 0
10:38:22.510600 IP npm.homelabdomain.tld.41766 > 192.168.254.6.5000: Flags [.], ack 177, win 501, options [nop,nop,TS val 992168028 ecr 2515934456], length 0
You’ll notice all three IPs listed in the “backend” configuration are getting traffic.
Alpine Linux, from their website, is “a security-oriented, lightweight Linux distribution based on musl libc and busybox.” I don’t 100% know what that means, so here’s their site 😄
To kick this off, we are going to install Alpine on Proxmox, do some basic configuration, install the Alpine Configuration Framework, install and configure WordPress with lighttpd, mariadb and php, then tie it off with a simple Samba share.
Install Alpine Linux on Proxmox
You can deploy with whatever resources you wish; for this I’ll be using a 32GB disk, 1 vCPU, and 2GB RAM.
Once the VM is booted, you’ll be asked to log in. Just type root and hit enter, and you should be logged in.
Now you can run the setup process. Run the command:
setup-alpine
From that script, you’ll be asked a series of questions. I’d like to eventually develop an answerfile for this as most of my stuff is the same across the board. Here are the answers I used:
keymap: us us
hostname: alpinevm
interfaces: eth0
networking: dhcp
root password: 'somethingstrong'
timezone: America/Chicago
fqdn: alpinevm.domain.tld
proxy: none
repos: c r
user: no
ssh: openssh
disk: sda
install mode: sys
Once the setup finishes, it’ll ask you to reboot. Since I went with the sys install mode, I’ll remove the ISO from the VM before rebooting.
Get logged back into the VM through Proxmox console.
For this example only, I’m allowing password login via root on the VM by modifying /etc/ssh/sshd_config (and restarting sshd afterward with service sshd restart):
PermitRootLogin yes
At this point you can now SSH to the VM to work with it. It’s a functioning Alpine OS VM with no services at this time.
Alpine Configuration Framework
I’m not a Linux guru by any stretch of the imagination, so I’ve relied on other tools to help me at least visualize or read data from Linux systems in the past to aid in configuration. Many times that was Webmin/Virtualmin. You can run Webmin on Alpine, but it’s a bit of a hack and I don’t recommend it. Alpine ships with a WebUI of sorts that may help you get started: the Alpine Configuration Framework (ACF).
To install ACF, simply run the command:
setup-acf
By default the mini_httpd server runs on port 443, so I’ll update mine to run on port 10443. Modify /etc/mini_httpd/mini_httpd.conf to show port 10443 instead of 443, then restart the mini_httpd service.
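If your config uses a port= line (an assumption; eyeball the file first), a one-liner handles the edit:

sed -i 's/^port=443/port=10443/' /etc/mini_httpd/mini_httpd.conf

Then restart the service: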
service mini_httpd restart
Now open a browser to https://<yourvmip>:10443 and login with root.
WordPress
WordPress, while being a huge player in the Web CMS game, is also super easy to use and a great starting point for your new website/blog. To deploy WordPress, we will install lighttpd, mariadb, php, and more to get it all up and running.
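The install, roughly, looks like this (package names are from memory; verify with apk search php83 and swap in whatever PHP version your Alpine release ships):

# Web server, PHP (CGI plus the extensions WordPress needs), and MariaDB
apk add lighttpd php83 php83-cgi php83-mysqli php83-session mariadb mariadb-client

# Initialize MariaDB and set everything to start now and at boot
/etc/init.d/mariadb setup
rc-update add lighttpd
rc-update add mariadb
rc-service mariadb start
rc-service lighttpd start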
Samba
Now let’s create a user and give it a Samba password (use the same username/password both times):
adduser username
smbpasswd -a username
Finish by starting samba and setting it up to start at boot:
# rc-update add samba
# rc-service samba start
All done. Now you can navigate to \\ipaddress\data to access your share with the user/pass you created. Possibly bad advice: create the same user/pass combo your workstation is using to access the Samba share (i.e. usernamexyz/passwordabc on Windows 10 and usernamexyz/passwordabc on Alpine/Samba).
I’ve tried writing this post a dozen times now, and I think I’ll go with the most simplistic route of just giving a high-level overview. You can ask questions or Google your heart out to find the missing pieces. The purpose of this document is to inspire you to build a homelab, not to be a full step-by-step, so here we go…
Proxmox is a free, enterprise grade, open-source virtualization platform that allows you to run multiple virtual machines on a single host machine. This is a great way to experiment with different operating systems and software without having to dedicate a physical machine to each one. Docker is a containerization platform that allows you to package an application with all of its dependencies into a self-contained unit. This makes it easy to deploy and run applications consistently across different environments. OpenMediaVault is a network-attached storage (NAS) solution that allows you to create a centralized storage pool for your data.
By combining these three technologies, you can create a powerful and versatile homelab that can be used for a variety of purposes. For example, you could use Proxmox to run a virtual machine for your Ghost blog, a virtual machine for a web server, and a virtual machine for a media server like Plex. You could then use Docker to deploy containers for additional services, such as a database or a development environment. Finally, you could use OpenMediaVault to create a centralized storage pool for your data with NFS or SMB, such as your blog posts, media files, and backups.
I am building this on a 12-year-old Dell Optiplex 7010 with an i7 CPU, 12GB RAM, and a 240GB SSD. It’s not much, but it’ll more than get the job done and get you on your way to building a new homelab or expanding your current one.
We will start off easy, but also be fully functional by the time this is done. On top of Docker we will run three services: Nginx Proxy Manager, Ghost CMS, and Watchtower. Ghost is the service we will present to the world. Nginx Proxy Manager will handle SSL offload and some security fundamentals. Watchtower will keep the Docker containers updated. Docker will be managed via Portainer, giving a very polished and extensible Web UI for building your homelab. Essentially, this is what we are building:
Here’s a starting point for IP’s and resource allocation.
hostname | ip/url | cpu | ram | disk1 | disk2
pm1 | https://192.168.254.5:8006 | 8 cores | 12GB | 240GB | -
ovm1 | https://192.168.254.6 | 2 vCPU | 2GB | 32GB | 50GB
docker1 | https://192.168.254.7:9443 | 2 vCPU | 4GB | 16GB | -
Download and Install Proxmox on your hardware. Use the information presented in this post to answer as many questions as you can. This has been done a thousand times over and well documented on the internet. Proxmox’s wiki is a great reference.
Once you have Proxmox installed, login via the browser to the IP you set the server to. In the example here https://192.168.254.5:8006.
OpenMediaVault
Download the ISO for OpenMediaVault to your computer and upload that ISO to “local” storage on Proxmox. Use the information above as a reference. From there we install OMV via Proxmox’s UI. In the example here we create a VM with the OMV ISO using mostly defaults: 2 vCPU cores, 2GB RAM, and a single 32GB disk. You can create the 2nd 50GB disk for NFS here as well. Then we log in to OMV’s UI, do updates, and change any settings we need to (timezone, password, IP, etc.). Then we provision the 2nd disk as an ext4 filesystem, create a /docker shared folder, and finally expose that shared folder via NFS.
Create a new VM via the Proxmox UI with the following: 2 vCPU cores, 2GB RAM, 2 disks (a 32GB disk for the OS and a 2nd 50GB disk for NFS)
Log in to the OMV UI and make your settings changes, run updates and do a final reboot before starting the rest.
Create an ext4 filesystem on the 50Gb disk.
Create a /docker shared folder
Expose the /docker shared folder via NFS with the following: client – 192.168.254.0/24, permission – read/write, extra options – subtree_check,insecure,no_root_squash. Click Save, then Apply.
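Once applied, you can sanity-check the export from any Linux box on the LAN (IP per the table above):

showmount -e 192.168.254.6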
Ubuntu 24.04 LXC w/ Docker
For this step, we’ll be using Proxmox VE Helper-Scripts to install an Ubuntu 24.04 LTS LXC (Linux container) that installs Docker and Portainer for us. There are a few modifications we need to make in addition to deploying the LXC, like modifying the LXC conf file on the Proxmox host, permissions-related settings on the LXC guest, and then the NFS client setup. After that we’ll be able to deploy some Docker containers and start having fun.
From your computer, go to https://community-scripts.github.io/ProxmoxVE/scripts?id=docker and copy the install command to your clipboard, then paste it into the Proxmox console (I recommend opening a new console or using SSH on the Proxmox host). This kicks off an installer script with a series of prompts you’ll need to answer.
I rarely go with defaults for this for some reason so here’s what I do:
Select Advanced, then…
Container: Privileged
Hostname: docker1
Disk size: 8GB
CPU: 2
RAM: 2
Network: vmbr0
IP Address: 192.168.254.7/24 #make this your IP (per the table above, docker1 is .7)
Gateway: 192.168.254.254 #make this your default GW
Disable IPv6: yes
DNS Search Domain: blank
DNS Server IP: 1.1.1.1 #or whatever you use
VLAN: blank
Root SSH: yes
Verbose Mode: no
After your LXC container is deployed, use the Proxmox UI to shut it down. Take note of the ID number of the LXC container you created. Then from the Proxmox shell you’ll need to add a line to that container’s config file with this command: echo -e "lxc.apparmor.profile = unconfined" >> /etc/pve/lxc/103.conf #replace 103 with the ID number of the LXC you created
Now under that LXC in the Proxmox UI, you’ll want to go to Options –> Features and check NFS, Fuse and ensure Nesting is also checked. Save and start the CT.
Login to the Docker server with SSH
Create a new user with useradd dockeradmin
Set the password with passwd dockeradmin
Add that user to the /etc/sudoers file with the following syntax:
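A typical entry (ideally added with visudo rather than editing the file directly) looks like:

dockeradmin ALL=(ALL:ALL) ALL

This is also where I do the NFS client setup so the /opt/docker path used later actually lands on OMV. Roughly, on the LXC (the /export/docker path is my assumption about OMV’s default export root; confirm with showmount -e 192.168.254.6):

# NFS client bits: install, create the mount point, mount at boot
apt install -y nfs-common
mkdir -p /opt/docker
echo "192.168.254.6:/export/docker /opt/docker nfs defaults 0 0" >> /etc/fstab
mount -a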
Reboot the Docker server, log back in via ssh and run the command:
mount | grep nfs
If you see the mount, you are good to go onto the Portainer step.
Portainer
Portainer was installed during the LXC script process (or should have been). It’s pretty easy to install if you missed that step. In Portainer, I like to use “Stacks” as it helps me keep track of the docker compose elements I run as well as modify later without the feeling of starting from scratch…or doing everything from VI/CLI.
Let’s deploy our 3 stacks. Below are three already-modified compose.yml files that you’ll copy into Portainer’s web config under Stacks. Modify the particulars if you know what you’re doing. You’ll notice the volumes use the previously created /opt/docker directory that is NFS-mounted to OMV.
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80' # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81' # Admin Web Port
    environment:
      DB_SQLITE_FILE: "/data/database.sqlite"
      DISABLE_IPV6: 'true'
      INITIAL_ADMIN_EMAIL: [email protected]
      INITIAL_ADMIN_PASSWORD: reallySTRONGpassword
    volumes:
      - /opt/docker/npm/data:/data
      - /opt/docker/npm/letsencrypt:/etc/letsencrypt
Nginx Proxy Manager Docker Compose
services:
  ghost:
    image: ghost:5-alpine
    restart: always
    ports:
      - 2368:2368
    environment:
      database__client: mysql
      database__connection__host: db
      database__connection__user: root
      database__connection__password: REALLYstrongPASSWORD
      database__connection__database: ghost
      url: https://fqdn.publicsite.tld # UPDATE THIS WITH YOUR HOST.DOMAIN.TLD
    volumes:
      - /opt/docker/ghost/content:/var/lib/ghost/content
  db:
    image: mysql:8.0
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: REALLYstrongPASSWORD
    volumes:
      - /opt/docker/ghost/db:/var/lib/mysql

volumes:
  ghost:
  db:
Ghost w/ MySQL Docker Compose
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 300 # Check for updates every 5 minutes
    restart: always
This isn’t really a hack but rather a simple trick to keep your system management tasks as near to each other as possible. The trick I am referring to at the moment is using PuTTY on Windows. PuTTY is a nifty and free SSH, Telnet, Rlogin and Raw console program that allows you to do the aforementioned management tasks from a single application on Windows. PuTTY is available here.
Now for the easiest trick in the world. Download PuTTY from that website, rename putty.exe to ssh.exe, and place it in the C:\Windows\System32 directory. This will allow you to launch PuTTY from the command line just like telnet, as well as add some other neat things.
From the command line (run or CMD)
ssh -telnet 192.168.1.1
ssh 192.168.1.254
PuTTY can also be launched for an interactive setup with “ssh” from the command line. That will launch the newly renamed executable for you to change settings and connect to the items you’d like to administer. PuTTY also allows you to tunnel traffic through SSH with localhost connections. I’ll try to make a fun sheet on that as well. One good tutorial I read was how to set up a SOCKS proxy through an SSH tunnel.
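For example, dynamic SOCKS forwarding from the renamed executable looks like this (PuTTY takes -D just like Plink; the port number is arbitrary):

ssh -D 8080 user@192.168.1.254

Then point your browser’s SOCKS proxy at localhost:8080.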
Lately, with my pfSense firewall project, I’ve been pretty busy with the configuration, but now that I’m slowing down a little and finishing up the last bits, I can concentrate on a very important part of any firewall (or server, for that matter). I needed a way to test the amount of data (throughput) the link outside of my firewall could handle, and also to test the processor and disk usage under load (performance). I accomplished this with TTCP, a utility that allows you to send and receive multiple threads of TCP data. At the end of the test, which usually takes about a minute and a half, you get a display of how long the test took, what your buffer size was (that can be modified), and what your total throughput was. The binaries for Windows and other OS’ can be downloaded from here. You will need to have this running on two or more computers to get any kind of results. The “receiver” is run like so:
pcattcp.exe -r
The transmitter, another computer on a remote segment of the network, can be run with:
pcattcp.exe -t 10.0.0.20
The software will then work its magic and give you the report at the end of the test. To test from multiple locations, you can launch multiple sessions one right after another on the receiver side and have multiple computers transmit to that single receiver.
The diagram below illustrates what I’m trying to accomplish with this.
Alternatives to TTCP would be iperf and qcheck as well as a whole lot more.
I was surfing the net tonight like I normally do and found a very good video on why to use Google Docs. I use the online collaboration software as a place to centrally store my most readily used and edited files, including a to-do list for work, my home projects, my resume, and my monthly bills. I share the home projects and monthly bills documents with my wife so that we can collaborate on a subject and avoid the email-attachment tag game. Here’s the video:
He puts it into words and pictures much better than I can. I like the document icons. 🙂