No Touchpad on Dell Inspiron on CentOS 7

I was not able to get the touchpad working using any of the typical solutions. What I eventually found was that disabling the i2c_hid module fixed the issue, but the usual way of disabling it didn't work: adding an entry to /etc/modprobe.d/blacklist.conf had no effect. The real trick for this system was to add a kernel boot option in /etc/default/grub like so:

modprobe.blacklist=i2c_hid

Add that to this line as one of the options (there are probably several options in there already on CentOS 7):

GRUB_CMDLINE_LINUX="modprobe.blacklist=i2c_hid rhgb quiet"

Then, because this is an EFI system, you need to regenerate the GRUB config like this:

grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg

Then after a reboot the touchpad finally worked in Gnome.

PS. I found out that disabling the module is actually what fixes it, because I could run "rmmod i2c_hid" and it disappeared from the "lsmod" listing; immediately the touchpad started working. I also had one of the GRUB options set that did not seem to make any difference: i8042.nopnp.
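
In command form, the quick test is (run as root):

rmmod i2c_hid
lsmod | grep i2c_hid    # no output means it's unloaded; the touchpad should start working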

 

Posted in CentOS, Linux

MySQL 5.6 upgrade tips for Ubuntu 12.04 Precise

(Wow, one year to the day since the last post. How funny. Been too busy this last year to write anything up!)

These tips are only based on issues I've run into on my development server. Currently we have a stock Ubuntu MySQL 5.5.x package running in production on Ubuntu 12.04. That's getting pretty old! And right now I'm trying to solve some issues in our apps and wanted some extra features, plus I wanted to start testing 5.6+. I originally tried MariaDB, but this was a complete fail and broke everything. I basically had to purge all mysql* and mariadb* packages with apt-get just to get it installed, and even then nothing worked. Rather than fight with that, for now I decided to just upgrade MySQL. Eventually we'll be moving to CentOS 7 and MariaDB, but we've got a lot of backend scripts, ODBC, and other apps that need to be tested.

Currently I have 5.6.29 installed and it works. I can't get my PHP stuff to work right yet, but I think it's because of the custom "datadir" directive I have set. We run our installation in /data/mysql (right or wrong, that's what we've done forever, so I need MySQL to bend to my will on this one, though it does make things more difficult).

The PPAs out there don't seem to support 12.04 Precise anymore, which sucks, but I understand. I mean, it's an LTS and still supported! Then again, I'm not using the version Ubuntu supports either, so, my bad. I didn't even know this, but MySQL has APT packages and repos you can use, so I went with that. Use their guides, it's easy. Basically you get a .deb package you run with "dpkg -i" and it will ask a few questions about which version you want, etc.
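
From memory, the flow with MySQL's APT config package looks roughly like this; the exact .deb filename depends on the version you download, so treat it as a placeholder:

# download mysql-apt-config from dev.mysql.com, then:
sudo dpkg -i mysql-apt-config_*.deb    # asks which MySQL version/repo you want
sudo apt-get update
sudo apt-get install mysql-server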

First thing: I can't get my datadir to work. The MySQL installer package and startup scripts are ignoring the datadir in /etc/mysql/my.cnf. Also, a tip for anyone out there: MySQL's installer overwrites the AppArmor profile for usr.sbin.mysqld without making any backup of it, and if you run a custom data directory like we do, you'll need to re-add your AppArmor paths with the correct permissions.
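
For a custom datadir like ours, the profile needs entries along these lines (a sketch of what I mean by resetting the paths, not the full profile):

# in /etc/apparmor.d/usr.sbin.mysqld, alongside the stock /var/lib/mysql lines:
  /data/mysql/ r,
  /data/mysql/** rwk,

# then reload the profile:
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld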

I found a post/bug report on this: https://bugs.mysql.com/bug.php?id=68807
So I proceeded to try this:

mysql_install_db --keep-my-cnf --datadir=/data/mysql --defaults-file=/etc/mysql/my.cnf

That didn’t work either!!

I gave up on that and added the datadir directly into the mysqld_safe script, which I don't like, but I don't feel like solving this problem properly for the time being.

OK, next, I find out that MySQL still will not launch. I ran into an issue where it requires AIO on the file system (I have ZFS). There's a config option to disable it (innodb_use_native_aio=0), but again, I'm finding many of my parameters in my.cnf are being ignored.
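
For reference, the AIO option goes under [mysqld] in my.cnf; whether it actually gets honored is another matter, as I found:

[mysqld]
innodb_use_native_aio = 0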

Time to give up. I'd like to keep working on this, but I have more important things to do. Except... maybe we'll try MariaDB 10.1 once more. This time I left my.cnf alone and used their repository configurator for Precise. After the easy install, it launched, right from my settings in my.cnf and using my datadir! Yay! My web app seems slightly faster now too! Bonus! And best of all, I can see my entire logged query that was failing, which was truncated in 5.5. Even better, in minutes I had my app issue fixed.

All seems great, but I do have one problem, which doesn't need to be addressed yet. Looks like something in the old ODBC drivers is broken, because our old COBOL app is segfaulting now. (I probably purged some of the links and files it needed during the process above.)

In the end, I guess I didn't have much in the way of tips for this upgrade. My ultimate goal is to move everything to MariaDB anyway, and also to migrate from Ubuntu Precise to CentOS 7 some time this year.

 

Posted in Databases, Development, Linux Tagged with: , , ,

CentOS 7 Clone Server from mdadm lvm system

The info below is only quick notes from a clone I ran recently;
this is not a complete HOW-TO!!

My preference is to use LVM snapshot backups using fsarchiver.
I typically have a layout like so:

/dev/md0  –  /boot

/dev/md1 – LVM PV

LVM:
/dev/vg1/root – root
/dev/vg1/home
/dev/vg1/tmp
/dev/vg1/swap
(I may split out /root further, but it depends on my build needs; usually not anymore)

With the above I use my own script that auto-creates an LVM snapshot,
runs fsarchiver to back it up, and then dismounts/clears the snapshot.
I take those .fsa files and recover to the new system. For /boot I
remount /dev/md0 read-only and run fsarchiver on it directly.
(This could be dangerous, but has never failed me in 8-10 years.)
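
My script boils down to something like this (a minimal sketch; the snapshot size and backup path are examples, not my actual script):

#!/bin/bash
# snapshot root, archive it with fsarchiver, then drop the snapshot
lvcreate -s -L 2G -n root_snap /dev/vg1/root
fsarchiver savefs /backups/root.fsa /dev/vg1/root_snap
lvremove -f /dev/vg1/root_snap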

——————-  NOTES/PROCEDURE ——————

CentOS 7 clone server


Get FSA clones from orig.

Launch the Live Install CD for CentOS 7, go into Troubleshooting >
Rescue, skip any mounts, and go into a shell.


Create Partitions

NOTE: if the drive is >2TB, make it a GPT partition!
Make sure the 1M part1 is bios_grub type!! And set the raid type on the MD partitions.
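
Something like this with parted gets the layout described (a sketch; device and sizes are examples, repeat for the second drive):

parted /dev/sda mklabel gpt
parted /dev/sda mkpart grub 1MiB 2MiB
parted /dev/sda set 1 bios_grub on
parted /dev/sda mkpart boot 2MiB 500MiB
parted /dev/sda set 2 raid on
parted /dev/sda mkpart lvm 500MiB 100%
parted /dev/sda set 3 raid on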



Create MDs.

*** IMPORTANT ***
** HOMEHOST ISSUES ** Make sure there is no mdadm.conf
with any hostname settings; remove them otherwise. Also
CREATE with a --name directive and label it.
(This is not the hostname.)

mdadm --create /dev/md0 --metadata=1.2 --level=1 \
  --raid-devices=2 /dev/sda2 /dev/sdb2
## then stop
mdadm --stop /dev/md0
## then reassemble
mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2 \
  --update=name --name=boot --homehost="<none>"

(If you copy the above lines, watch the dashes;
WordPress HTML sometimes mangles them.)
**NOTE**
If homehost issues come up on first boot and you're
stuck at the dracut prompt, you can stop and
reassemble to reset those names and host settings.

**NOTE**
When all is said and done, make sure the final
mdadm.conf is set with the following.
(Notice /dev/md/0, not md0! And note the DEVICE/HOMEHOST settings!)
********************
DEVICE partitions

HOMEHOST <ignore>

AUTO +1.x +imsm -all

ARRAY /dev/md/0 metadata=1.2 name=boot UUID=65cc434a:33516f02:17909f3c:74005756
ARRAY /dev/md/1 metadata=1.2 name=rootlvm UUID=030cd899:987aec61:e348e3db:35b62748
*********************

#### EDIT 4-24-15 ####
The last clone I did DID NOT use the /dev/md/0 format;
it used /dev/md0 without error, so try that way too.
However, see the notes update below about UUIDs and the grub.cfg default command line.
#### END EDIT ####

Create LVs

Create your PV > VG > LVs
Format XFS
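
In command form that step is roughly (sizes are examples):

pvcreate /dev/md1
vgcreate vg1 /dev/md1
lvcreate -L 30G -n root vg1
lvcreate -L 10G -n home vg1
lvcreate -L 5G -n tmp vg1
lvcreate -L 4G -n swap vg1
mkfs.xfs /dev/vg1/root    # likewise for home and tmp
mkswap /dev/vg1/swap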

Restore FSAs
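
With fsarchiver, restoring the clones looks roughly like this (a sketch; the archive names are whatever you saved from the original box):

fsarchiver restfs /backups/root.fsa id=0,dest=/dev/vg1/root
fsarchiver restfs /backups/boot.fsa id=0,dest=/dev/md0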

Do mounts and chroot env.
— make sure to set up the mdadm.conf correctly in the chroot.
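
The mounts and chroot are roughly (a sketch; mount points are examples):

mount /dev/vg1/root /mnt
mount /dev/md0 /mnt/boot
for d in dev proc sys; do mount --bind /$d /mnt/$d; done
chroot /mnt /bin/bash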

— check and fix anything in /etc/fstab
< UUID mounts, and maybe /dev/md0 needs to be set?

— if not already there, add md.conf under /etc/dracut.conf.d/
< set the mdadm and LVM config variables (review dracut.conf
as an example). This is needed before building the initramfs.
(They might already be set from a previous build!)

— Do a dracut initramfs build
You'll need to do
"dracut -f -v /boot/initramfs-<kernel-version>.img <kernel-version>"
plus just a "dracut -f -v" for the current kernel.

— then check anything needed in /etc/default/grub
**** IMPORTANT (edit 4-24-15): change the GRUB_CMDLINE_LINUX
line in /etc/default/grub if you have rd.md.uuid entries!!!!
See the notes below in the edit.
— grub2-mkconfig -o /boot/grub2/grub.cfg
— grub2-install --modules="part_gpt mdraid09 mdraid1x lvm" /dev/sda
(and again for the other drive, to make it bootable)

— exit the chroot and do the unmounts

Should be able to reboot!

Once loaded (fix further if not) then check the mdadm.conf
and lvm.conf and fstab.

**CHECK**
If doing anything else with ZFS, you might need to
reset the hostid and hostname!!!!!!
Or else pools might not reimport on reboot.
nano /etc/hostname
< set the current new hostname
nano /etc/hosts
< set that new hostname in the localhost line

then
dd if=/dev/urandom of=/etc/hostid bs=4 count=1
<< resets a new hostid

** NETWORK **
Probably just need to fix/change any ifcfg scripts.

Should be all downhill from here!

#### EDIT NOTES 4-24-15 ####
Again, I ran into issues cloning. Turns out I had left entries in /etc/default/grub with the UUIDs of the old system. See this example (not the full line, and it should all be on a single line):

GRUB_CMDLINE_LINUX=" rd.auto rd.auto=1 rd.md.uuid=c2d08888:8e4464ae:3d321966:fd1111cc rd.md.uuid=bbd483af:95088699:76e2826e:14b6dc4f rd.lvm.lv=vg1/root

DON'T FORGET to change those rd.md.uuid entries or it won't boot!
Get your current UUIDs like so:
mdadm --detail /dev/md0 (and /dev/md1 too)
The UUID listed in those details is what you need in each rd.md.uuid entry.

Maybe it works without those in there, but grub wouldn't find the md devices and the dracut/initramfs boot wouldn't assemble my arrays until I changed them. The partitions and arrays were there, though, and I was dropped to a dracut prompt. From there I could manually assemble and boot, which allowed me to change the line above and re-run grub2-mkconfig.

A quick note while at the dracut prompt. Manually assemble:

mdadm --assemble /dev/md0
mdadm --assemble /dev/md1
lvm vgchange -a y vg1
exit

That booted my system. After fixing /etc/default/grub and re-running grub2-mkconfig all was good!

Posted in CentOS, Linux, Networking Tagged with: ,

Raspberry Pi as an Auto-connect SSH-Tunnel RDP Terminal (Works great with Virtual Machines!)

Lately I have been interested to see if the RPi is up to the task of being an RDP terminal, especially when accessing an offsite machine or VM over an SSH tunnel. I tried the Pi on my own internal network to access my various computers through the Remote Desktop Protocol, and have been pleasantly surprised. Using FreeRDP and the -f (fullscreen) flag, other than some strange color issues, you can hardly even tell that you are on an RDP connection.

This prompted another question: could an RPi access a work computer from home, free from the mess of VPNs? Even more, not everyone is terribly comfortable with Linux, so I wanted a setup that could auto-login from a remote location, so that when you boot the Pi it seems as though you are booting into a Windows computer. After some messing around I have it up and running, and it has been awesome!

To create an auto-connect Pi RDP terminal you'll need a Pi, an SD card with Raspbian installed (or NOOBS), and an understanding of how to access the GUI as well as the terminal (either directly or from the GUI).

To start:

Run a terminal (if you're in the GUI, double-click LXTerminal) and run the following:

sudo apt-get update
sudo apt-get upgrade

Note: this may take a while!

Then run:

sudo raspi-config

Note: this is the interface that automatically starts when you boot Raspbian for the first time.

Make sure to (if you haven’t already)

  • change the default pi password
  • set auto-boot to the pi user X11 GUI
  • set the correct international settings, particularly the keyboard layout

Then select finish and run:

sudo apt-get install -y freerdp zenity

This will install the necessary software to get our connection going. Next create the following file:

nano /home/pi/Desktop/remote.sh

In this file add the following text:

#!/bin/bash
exec xfreerdp -f -u username -d domain.tld \
  -p "$(zenity \
    --entry \
    --width='380' --height='220' \
    --title='Password' \
    --hide-text \
    --text='Enter Password')" \
  127.0.0.1:63389

Note: change username to your remote computer's username, and domain.tld to the domain (if there's no domain, leave out -d domain.tld).

To explain a few things:

We are calling on FreeRDP to create a connection using the username and password provided. The section after the password flag -p will pop up a window asking the user to type in the password. I use this method instead of storing the password in the file. Though, to truly have it automatic, you can change the file to:

#!/bin/bash
exec xfreerdp -f -u username -d domain.tld \
  -p Password \
  127.0.0.1:63389

Of course, change username, domain.tld, and Password to the correct credentials. This would offer the benefit of being completely automatic, though it's a pretty big security risk and just isn't worth it, IMHO. The 63389 is the port we will use on the Pi as the local end of the tunnel to the remote computer's port 3389 (the RDP port). It doesn't have to be 63389; it could be anything from 1024 to 65535 that isn't already used for something else.

Finally, the remote.sh file must be executable so run:

chmod +x /home/pi/Desktop/remote.sh

The next step is to create an rsa key to connect via ssh to your remote server without having to type in a password.  If you can already log in to the remote server without a password, you can skip this section.

Run:

ssh-keygen -t rsa

Note: press enter 3 times

This creates a folder called .ssh in your home directory (/home/pi/.ssh) containing a public and a private key. If the remote computer already has a .ssh folder in the same location, the command below will append your public key to the server's authorized_keys file and give you passwordless access.

Run:

cat /home/pi/.ssh/id_rsa.pub | ssh USERNAME@SERVER 'cat >> .ssh/authorized_keys'

Note: change USERNAME and SERVER to the remote server's username and address, the same as you would to access it through ssh normally.
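
If your system has it, ssh-copy-id does the same append in one step:

ssh-copy-id USERNAME@SERVER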

The next step is to have the small program that you made earlier on the desktop run automatically when the GUI starts on boot.

Run:

sudo nano /etc/xdg/lxsession/LXDE/autostart

Add a line at the end of the file as follows:

@/home/pi/Desktop/remote.sh

Save and exit. Finally you need to have the ssh tunnel automatically connect on boot. To do so, Run:

crontab -e

Add a line at the end of the file as follows:

@reboot ssh username@address.tld -L 63389:127.0.0.1:3389 -N

Save and exit.

Note: the username and address.tld need to be changed once again. The 63389 needs to be the same as the port listed in the remote.sh file. The 127.0.0.1 only stays that way if you are logging directly into the remote server; otherwise, change it to the name or IP address of the computer that you intend to log into. Remember that the computer on the other end needs to have RDP turned on, with its settings set so that any type of RDP connection is allowed. If anything goes wrong, you can simply reboot the Pi and reconnect. You can also press Ctrl + Alt + Enter, close the window, and double-click remote.sh on the desktop to reconnect that way as well. (Select the first option, Execute, not "Execute in Terminal".)

Have fun!

 

Posted in Uncategorized

VMDK convert and raw IMG mounting on Ubuntu 14.04 Linux

I haven't tested mounting just the vmdk image directly; I wasn't sure if that would work, so I ran a conversion to a raw img file first. I don't need to do this regularly, but I wanted to move an old vmdk virtual disk onto a ZFS file system with NTFS on it for use in KVM. (I could have kept using the vmdk, but it was too small anyway.)

First, convert (cd to your vmdk dir; I am in /vm/sys):

qemu-img convert -f vmdk sourcefile.vmdk -O raw newfile.img

 

Next set the loop dev and find the partitions using

loopdev=$(losetup -s -f newfile.img)
kpartx -av $loopdev

 

On my system it added a new /dev/mapper:

/dev/mapper/loop0p1

 

Make a mountpoint and mount the loopdev as NTFS

mkdir mntimg
mount /dev/mapper/loop0p1 /vm/sys/mntimg

From here, I could easily mount my ZVOL, get it partitioned, formatted, and ready for an rsync!
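
When done, tearing it back down is the same steps in reverse:

umount /vm/sys/mntimg
kpartx -dv $loopdev
losetup -d $loopdev    # may be unnecessary if kpartx already released the loop device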

This of course doesn't help if I want to edit the VMDK disk image and continue to use it. Later I might test a direct vmdk mount and see what happens, because that would be helpful when fixing non-booting VMs!

Posted in KVM, Linux, Ubuntu, Virtualization, Windows XP Tagged with: , , ,

Windows 2003 Server 32bit guest on KVM host VirtIO drivers

This was a tough one to figure out, yet simple to do once I did it right. I am trying to set up a lab network to do some testing with recovery and migration of older Exchange and Windows servers before we upgrade to 2012 R2. I also need to test KVM further, because I am seriously considering moving some of our servers to virtual guests, and KVM looks awesome!

I ran into many issues getting Win2k3 to load, mostly because, well, it's an '03 server! They are such a pain anymore, and as much as I hate to admit it, I kind of like the newer Windows Servers ('08 and up). Still prefer Linux though!

OK, to the main point. In order to get reasonable performance in the VM guests, I read that the VirtIO drivers for block devices and network are needed. After installing them, I agree, much better! Only problem: they are difficult to install on an '03 server! I found lots of people posting similar issues out there, and lots more people posting back "works for me!".

One thing I needed was to get the VirtIO driver loaded for the OS boot disk itself. Loading during the F6 prompt absolutely would not work for me. I could select the driver, and it would see it and the floppy disk image just fine, even with different versions, yet it would not work.

I decided to go with IDE on the OS disk and finish installing. This worked fine, it's just not very fast. When the install finished, I added a second disk image set to VirtIO type in virt-manager and rebooted the guest, leaving the original OS on IDE for now. It booted up, detected the new SCSI controller, and attempted to load it, but it failed stating "Cannot start this hardware", which left the yellow exclamation mark in Device Manager.

This partially installed driver really broke the OS though! It brought the system to a crawl with high CPU usage. Even after the CPU usage settled, it was still at a crawl. It took minutes just to click on things and open anything.

I was able to boot with F8 and select the "Last Known Good Configuration" option, which restored my registry and allowed the system to run again. I tried this with several driver versions I found online from people mentioning how hard it was to load VirtIO drivers on '03. ('08 and newer was a piece of cake!)

But then… I tried once more, adding the current ISO image from Fedora again (which I had used first, by the way). Only this time, I selected the \WNet folder for the driver. Yeah… the "WNet" folder on the ISO… this actually has the SCSI drivers! Doesn't make any sense! "Net", shouldn't that be the "network" driver????? Well, coolness happened: the Red Hat VirtIO SCSI Controller installed perfectly, finally!!!!

So next I tried the network driver, which, if I remember right, was in the \WLH folder of the ISO. And finally, I added the "balloon" memory driver from the "XP" folder. Awesome! None of that makes any sense!! If I tried to load or update drivers from any of the other folders, OR allowed Windows to find them on its own, it wouldn't work.

So, "yeah, that's the ticket!" On the VirtIO ISO from Fedora:

\WLH – NIC driver for VirtIO

\WNET – SCSI Controller VirtIO driver

\XP  – Balloon driver (memory management stuff)

 

UPDATE (next day)
I found out why the naming is confusing:

https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers

I guess it has to do with Microsoft legacy naming, like “LH” = Longhorn.

 

Last note…

Oh, also: to get the OS disk to run as VirtIO, you just switch it in virt-manager (or use the command line, which I haven't learned yet). You just need to:
A. Load the OS on an IDE disk first.
B. Add a new, temporary VirtIO disk, and install the VirtIO driver correctly in the '03 server.
C. Shut down the guest and switch the original OS disk from IDE to VirtIO.
D. Boot up, and it should be OK!

 

Posted in KVM, Networking, PC Repair, Problems, Virtualization, Windows Server Tagged with: , , ,

Freebsd CrashPlan backup folders outside /compat/linux solved

Well, at least I think I solved this… there may be a better way, but on my system this worked. I am still running PC-BSD 9.1, just an FYI.

Checkout the original CrashPlan installation post here:

How to install Crashplan on FreeBSD

OK, so here's the main problem: CrashPlan installed in the /compat/linux environment on FreeBSD is unable to see some system directories, in particular the user home directories in /usr/home.

I have run into this for, well, years now! For me it was never an issue because my BSD system didn't have any user content in /home. Until now. I have lots of little development tests and things I am working on, so I now want to back up my /home folders. Of course, CrashPlan won't see those folders. (WHY? I don't really know, but if I had to guess, it has something to do with either the /compat/linux environment, or maybe jails, or maybe ZFS, or a combination of them all!)

THE FIX (for my system, this solved it and it worked!)

I used nullfs! Using the "mount_nullfs" command mirrors the underlying OS directories into a place the Linux environment can see.

1. WHILE NOT IN COMPAT LINUX: use the normal "root" user in FreeBSD.

2. Create a directory under /compat/linux

mkdir /compat/linux/null_home

[Extra note] 2B. My user account had its own ZFS filesystem, so I needed a second directory!

mkdir /compat/linux/null_greg

 

3. Next mount using nullfs.

mount_nullfs /usr/home /compat/linux/null_home
mount_nullfs /usr/home/greg /compat/linux/null_greg

(remember, you may not need the second command if all your home folders are contained on one file system)

 

4. Open your CrashPlan and add the newly mounted directories!

I don't have those set to auto-mount, so they'll need to be remounted after a reboot. (I'll figure that out later!) But for now I can at least get my FreeBSD home folders backed up in CrashPlan!! Yay!
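
If I ever do want them automatic, /etc/fstab entries along these lines should do it (untested on my box):

/usr/home         /compat/linux/null_home   nullfs   rw   0   0
/usr/home/greg    /compat/linux/null_greg   nullfs   rw   0   0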

 

Posted in Backup, FreeBSD Tagged with: , , ,

Outlook cannot save attachment error

A user at one of my clients ran into this today. He couldn't open any images that were attached to emails in Outlook 2003. I first cleared out any temp files in the %USERPROFILE%\AppData\Local\Temp folder and reset it to NOT be read-only. I also reset all permissions in that folder and its subdirectories.

Then I went to Internet Explorer settings, reset all settings, and cleared out the cache.

Lastly, I found the Microsoft article below, which covers the OutlookSecureTempFolder registry setting; they now supply a "Fixit" option. I just ran that, and then his images would open normally!

http://support.microsoft.com/kb/305982/en-US
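
For reference, I believe the registry value the Fixit adjusts lives here for Outlook 2003 (version 11.0):

HKEY_CURRENT_USER\Software\Microsoft\Office\11.0\Outlook\Security\OutlookSecureTempFolder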

 

FYI: This was on Windows 7 Pro, IE11, Outlook 2003.

 

Posted in Internet, Microsoft Office, PC Repair, Problems, Windows 7 Tagged with: , ,

Google Apps Multiple Calendars on iOS Sync not working on Exchange Account setup

OK, this took some time to figure out. A good example of a feature I don't use myself, so I didn't know how to do it.

The problem: a Google Apps "Exchange" account on the iPad doesn't work well with multiple calendars or shared calendars. (I wasn't aware of that.) A Gmail account setup does work, I guess.

Why use Exchange then? Because then you get a "Push" account, instead of long wait times to fetch data. For email, this is much better!

The fix: I just figured out a good way to do this. Set up Calendars separately!

1. Go to your Google account setup on the iPad; where it says what to sync, uncheck Calendar, so you'll only have Contacts and Email selected. (Say yes if prompted to delete.)
2. Under the Mail, Contacts, Calendars settings, Add Account. Select "Other" at the bottom, then under Calendars select "CalDAV".
3. In the CalDAV account settings, add your account information (use any description you prefer). Tap Next, and you should get an option to select all your calendars to sync.
4. Then in your Calendar app, you should be able to pick any calendars you like.
5. Extra credit: select your primary calendar. Under the main settings for "Mail, Contacts, Calendars", scroll down to the Calendar section and pick the "Default" calendar you prefer.

Also worth noting: you may need to "select" the calendars for sync in Google's online settings; their help docs point you to a page where you can select which calendars are available for syncing. I did all that beforehand and it still didn't work with the Exchange account setup for Google. I think they have disabled that access, although email and contacts work great for me that way. For calendars, I could only get multiple calendars to sync by using a separate CalDAV account on the iPad.
Posted in Business, iOS, iPad Tagged with: , , ,

Zend Framework 2 alternative layout template changed at the Module.php level

I am sure people have found the correct way to handle this, but I keep ending up at older ZF2 how-to sites in Google searches that solve this the wrong way, when there is more current and better information available for changing a layout template at the module level. In my case, I just wanted to do some testing and swap the layout at the module level to override the default set in config. It was actually simple, but I thought I'd share what I did in case others also find only the older information.

Make sure you have the alt_layout.phtml file you need, of course. In the MVC Module.php onBootstrap():

$this->view = $e->getViewModel();
if ($maybe_i_want_some_alt_layout == true) {
    $this->view->setTemplate('layout/alt_layout');
}

 

The above snippet will override the entire module's default template. In my case, I wanted to quickly swap out the template to render differently on an iPad, so I used a check for that, then swapped the layout. There are better, fancier, more clever, or maybe more "ZF2-friendly" ways to do this, but this was a simple solution for my use.

I can still override at the controller level manually if needed. Use this in the actual controller's pre-dispatch (if set up that way) or in the action. It's not very flexible, but I just needed a specific action to show a special layout.

$this->layout('layout/special_controller_layout');

Posted in Development, PHP, Zend Framework Tagged with: , ,

No Internet Access, Disable Microsoft Virtual miniport adapter

I don't know how users keep managing to get into a bind with this, but twice in the last couple of weeks I've been called by people with no internet access who still have server file shares working. Everything seems fine in their adapter settings for IP, the browser seems fine, and no viruses. I can even tunnel in over SSH to their RDP port 3389 and remote-access the computer.

But then I discover this Microsoft Virtual miniport adapter thing in their network connections. What the heck is that? And why is it even there? (Don't answer; I don't really care unless it keeps happening, and knowing still won't prevent people from screwing up their computers.)

Anyway, all I needed to do was disable this thing, and the internet worked just fine; users are happy. Open an "administrator" command prompt. (Start -> Run: type "cmd" and hit Ctrl-Shift-Enter; that's how I get to it.)

Type these two commands:

netsh wlan stop hostednetwork
netsh wlan set hostednetwork mode=disallow

It complained that it needed a reboot, but I skipped that and it worked anyway. I figure they can reboot later and get back to work right now. :)

 

PS: I found some information on the Wireless Hosted Network in Windows 7.

http://msdn.microsoft.com/en-us/library/dd815243(VS.85).aspx

I don't want to take the time to read it all, but at a glance, it looks like it turns your laptop into a WiFi hotspot. Too bad it screws up your LAN connection in doing so. If I had to guess, it probably changes your routing table, and that's why it breaks the internet and any other external network access. Not sure why anyone would want this, but I am sure there are some interesting use cases I haven't thought of.

Posted in Business, Internet, Windows 7 Tagged with: ,

Lighttpd url.rewrite rule for Zend Framework 2

I was having some major issues while attempting to run Zend Framework 2 on Lighttpd, because its rewrite rules were not as easy to create as Apache's. Actually, I can't say they are harder, just very different. Here's what seems to be working now though:

 

url.rewrite-if-not-file = ( "/(.*)$" => "/index.php?$1", )

By the way, I had to figure this out because, being adventurous, I wanted to run a web stack on iOS using Cydia packages. That way I have a mobile server on an old iPhone, available over WiFi for my iPad to develop against. Really dorky, but sort of working so far, and it allows me to work offline without having to cart around a laptop.

 

Posted in Development, Internet, iOS, Linux, PHP Tagged with: , , ,

Recovering Ubuntu 12.04 ZFS on Linux root pool mirror boot failure

ZFS Root Mirror Zpool

Recently I ran into a problem booting my root ZFS mirror. I couldn't get it to load, and only saw a blinking cursor or blank screens. Grub could see the filesystem though! I suspect I had a corrupted or out-of-sync zpool.cache file. Don't ask me how that can happen, because the whole reason I run ZFS is so things like this don't happen, yet on Linux the zpool.cache can get messed up somehow! What I *think* happened has something to do with all the recent zpool changes I had made. I had a couple of pools of drives that I moved out of this system, keeping only the root pool mirror for my development system. So I don't have answers for why the cache gets messed up, but I do think my recent changes had something to do with it. Probably something I did!

So I booted to a thumb drive that had 12.04 Server installed. I suppose you could just as easily use a Live DVD or USB stick, but you'll need to apt-get install the ZFS packages.

After I booted my thumb-drive server, I updated the zfs-linux packages!! I couldn't get to the drives until they were updated, because of the new ZFS format; I think the pool format version was 5000 and the older ZFS utils didn't work with it. Having installed a handful of FreeBSD and Ubuntu servers with ZFS now, I am thinking it's better to create root ZFS pools using version=28 for compatibility. I think that's the advice of the ZFS on Linux people too.

Anyway, once the server had working ZFS, I exported everything and reimported like below:

zpool export rpool

zpool import -d /dev/disk/by-id -f -N rpool

Now that I had my rpool imported but NOT mounted, I mounted it so I could replace the cache. (NOTE: I am using /mnt/rootfs as my mount point, but instructions online typically have you mounting to /root; just substitute whatever is best for your system.)

mount -t zfs -o zfsutil rpool/ROOT/ubuntu-1 /mnt/rootfs

The guides online tell me, DO NOT MOUNT other file systems! (ok, got it!) Just copy over the zpool.cache. I made a backup first, just in case.

cp /etc/zfs/zpool.cache /mnt/rootfs/etc/zfs/zpool.cache

Then…

Did a chroot into my mirror and updated /etc/default/grub
to show a text-only boot console.
Did an update-initramfs -c -k all.
Ran grub-install /dev/sda and /dev/sdc (which are my mirror drives).
And lastly, update-grub.
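
For reference, the chroot sequence was roughly like this (a sketch; /mnt/rootfs is my mount point from above, /dev/sda and /dev/sdc are my mirror drives):

for d in dev proc sys; do mount --bind /$d /mnt/rootfs/$d; done
chroot /mnt/rootfs /bin/bash
update-initramfs -c -k all
grub-install /dev/sda
grub-install /dev/sdc
update-grub
exit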

Exited the chroot environment and unmounted /mnt/rootfs.
Reboot! Success!

Posted in FreeBSD, Linux, Problems, Ubuntu Tagged with: , ,

Netatalk connection refused from IPV6 hosts

Error showed: dsi_getsess: Connection refused.

Was actually a simple fix.  (FYI, I am running Netatalk 2.2.4 on FreeBSD/PCBSD 9.1)

Open /etc/hosts.allow

Add lines for afpd for your network/hosts. Change to match your environment, of course.

afpd: [ fe80:ca2a:%nfe0 ] : allow
afpd: .1stbyte.lan : allow
afpd: 192.168.0.0/24 : allow

 

In the above, I added my local domain ".1stbyte.lan" and the first part of the IPv6 address that showed in my error log. The %nfe0 is my network interface, so change it to your interface along with your network settings.

The logs showed an error like below:

afpd[85301]: refused connect from fe80::ca2a:14ff:fe2c:5cf3%nfe0
afpd[85301]: dsi_getsess: Connection refused

afpd[85301]: dsi_disconnect: entering disconnected state
afpd[85301]: Disconnected session terminating

 

 

Posted in FreeBSD, Mac, Networking Tagged with: , ,

Error Apache PHP Suexec FastCGI session_start open O_RDWR Permission Denied

Note to self: Fix this Error in Apache PHP Suexec FastCGI session_start open O_RDWR Permission Denied

Warning: session_start(): open(path-to-tmp-dir, O_RDWR) failed: Permission denied(13)

The solution was simple, but most of my PHP scripts didn't even show this error.

The fix:

Go to your /var/tmp directory
(or wherever your session temp files go, probably set with session.save_path in php.ini).
Then remove all the sess_* files.
I just ran: rm -rf sess_*
I don't know if it's needed, but I reloaded Apache too.
All was working after that.

Not sure why it happened…

I got some weird errors about Zend/Session/Container and "invalid type". All I had done was copy my testing code to another server and set it up using FastCGI/suexec in Apache; my other server didn't have that environment and ran Apache the normal way without FastCGI. Anyway, in the suexec environment it didn't show the error above until I changed my vhost config to not use FastCGI for PHP; then it finally showed me the error above, which led to the simple fix.

For the record, I am using FreeBSD 9.1, Apache 2.2.x, and PHP 5.3.8.

 

Posted in FreeBSD, Internet, PHP, Problems, Zend Framework Tagged with: , , ,

SimCity Ridiculous – SimCity 5 Fail and Fail and Fail

The new SimCity game was released a couple days ago. Nice. I guess. Like the rest of us, I can hardly get into the game due to overload and bugs. The game requires me to be online and play on their servers, and even worse, the one thing that really bothers me: my game saves are on one single server only, and only there, online, on that server. Not any of the other servers they have, just that one server.

Before I get into this further, let me say: on day one of its release, I downloaded it and played it without issue. Also, I had quite a fun time. For the most part, it's an awesome game. I plan on spending quite a bit more time having fun with it, creating my cities and expanding my creations over the next 10 years or more.

OK, notice how I said "over the next 10 years"? I have played SimCity 4 since 2003 and still play it, and will continue to play it. I have two huge regions I have slowly built upon over the last 10 years. Yeah, think about that. I basically have a single game save that I have been building on for 10 years! SimCity is the kind of game where you mold and shape and rework everything, and you grow with it. It takes a lot of time, and it's not something you want to dump hundreds of hours into with the possibility of it going away anytime soon. It's a project, a long-haul project.

What happens if EA decides they don't want to run their servers? Just a thought, right? I mean, in 8 or 9 years, I want to open up my region(s) and continue building my cities, right? What if EA isn't there? What if EA is there, but they decide it's not profitable to maintain the servers? Do I lose all that time and creation?

SimCity isn't an adventure. It's not a story. It's not a puzzle to be solved and then started over. It's not a place to hang out. It's not something I want to share, either! It's not even an empire you build, like in Civilization. It doesn't have an end point. It is a game where you build, design, strategically develop, and plan a virtual city. It's something you start, leave, come back to, and continue where you left off. IT'S A PROJECT! And it never ends.

Knowing I have this huge project that I have spent so much time tweaking and adjusting to grow my cities just the way I like it, I really, and I mean REALLY, do not want anyone else messing with it.  I just have to wonder, why would I find any joy in building a city that others can take part in?  Why would I want to even share a region?  Other people will just screw up my project I spent so much time creating!

Getting back to the online thing… What happens if I take a 6-month break from SimCity, which I have done, and I come back to my region on my one single server at EA, and they've lost it or abandoned it, or somehow made that region, or even cities in that region, unavailable? Why would I spend my time on something I can't come back to? Am I supposed to just rely on them backing it up? I hate to say it, but I back up even my cloud files in Gmail, Dropbox, even my Rackspace servers and web sites. They are all backed up, and once in a while I copy them to my own local hard drives (running on multiple ZFS arrays, of course). In addition, SimCity 4 I have installed on a virtual machine that's backed up as well. That way, in 10 years, I can still run an old OS and play my old game.

Maybe I am just weird. Maybe I should be more social and force myself to enjoy building multi-city regions with other people, and in truth, I *might* like playing and building with my son or daughter.

Also, it's not like I won't play any shared games. I'll definitely try that. Those would be the times I just want to "get my fix" of SimCity. But I don't play SimCity to solve a problem and spend an hour here or there. Call of Duty, for example: I just want to get some quick shoot-em-up time in. Again, though, that's not something where I am building on top of my previous work. I'll post updates over time as I try the multi-city, public regions.

The game has some other flaws that bother me, but they are not nearly enough of a problem to prevent me from enjoying the game. For example, I really don't like the tiny city size and the small, square cities in the region. It just looks ridiculous! I suspect they did this in part to keep game performance under control, but also as a way to force the use of specializations and services in other cities. I like that idea, but I really don't like the implementation. Cities don't build in square boxes on the ground, it's stupid! They sprawl outward and upward. They eliminated the outward sprawl.

Another flaw: one connection to the region? What? And only in the place they decide? What about the huge open area between CityA and CityB? That place looks like an awesome spot to put a city. These cities can't grow together, connected in several places. Take a look at any city in the real world, and you will find many different routes to get from point A to point B, from CityA to CityB. Well, SimCity 5, you just get that one highway connection. One connection and huge open areas you can't touch. Sorry, I think that's just stupid!

In some ways, SimCity 5 has taken a direction I think will be a problem for replay value. It's forcing multi-user/multi-city play. As I said above, I want to create. My region is my creation. I'd better be able to come and go as I please, and do so for the next decade or longer! And I didn't even get into the saved games on single servers, did I???!!!! Because of the "building" style of play, I want to return to my game. So far, I can only return to a single server to enjoy it and continue building. What cloud service out there does that? How incredibly backwards that is, to keep my saves on only one cloud server. That would be like Google telling you that you can't log into your email because your data is on a server that's too busy right now, so you will have to wait, or you can start a new email database on a new server, separate from your old one. What??? EA, if you're going to take our data files out of our control and put them "in the cloud", then you'd better make them available IN THE CLOUD, NOT JUST ON ONE SERVER!!!! FAIL, EPIC FAIL!!!!! It's 2013!!! Diablo 3 had connectivity issues, sure, but at least you don't need to pick a server!!!! Your data is just there, in the cloud. EA doesn't have a cloud, they have PATCHY FOG!

Before I finish my rant, let me say what I do like about the game. It's very "SimCity-ish", and in sim-city-style building, it's awesome! For the short time I have been able to play, it was very much fun. I actually do love having so many interconnected aspects of the cities in the region. I really like the detailed upgrade paths that the buildings/services can take. I really like the specializations!! All really cool stuff and very much worth the hassle of getting to play. I really like that direction they have taken. I just don't like the online aspect of it. As I said, maybe I am weird and the majority don't see it that way.

It's a fun game regardless. I am looking forward to lots and lots of hours playing, once they work out some bugs and add more servers. (Umm… I won't even get into that one, but what the heck were they thinking? Run the whole world's SimCities on not even 10 servers? Seriously?)

 

UPDATE: 3/13/2013

First, thank you to the EA/Maxis people for adding more servers and making the game more reliable and easier to get into. You’re doing a great job!

Second, SIMCITY 5 still FAILS!! I don't think you EA/Maxis people are going to fix this one, but my biggest annoyance is that my cities live on only a single server! I don't mind having an online-only game (well, sort of), and I don't mind some of the other flaws (at least not too much). I really do mind my data files being on your one single server. I only look for my "Europe West 1" server and play on that one, because that's where my region is! I want to keep building on my same region. Please, EA/Maxis, can you just make it a download or provide a transfer mechanism? At least give me a way to back up my data files and restore them onto your servers. I'd really like that.

Fun game though! I am enjoying my time in the game; it's addicting. Still haven't played in public games yet, and don't have plans to. I don't care about playing with others. Sorry.

Posted in Uncategorized

Zend Framework 2 Global Database Adapter Object and Config Variables

I wrote this up on Stackoverflow.com too as an answer to my own question. (links below)

So basically , I want some config variables available app-wide in my Zend Framework 2 apps. I also want a database adapter object available and connection. Controllers don’t have preDispatch by default now.  I used to use Zend_Registry in ZF1 to store and retrieve some values and my “db” object. (my database connection and query tool)  I ran into a several problems moving to ZF2.

One problem I ran into was the global variables or config items. But what really caused me trouble was the lack of an "init" or "preDispatch" method on the controller. In addition, I couldn't add/get objects and values from the controller's __construct, because that info isn't yet available at that point in the process.

I also ran into a problem using Zend\Db and making my connection available site-wide. The answer is to use the new ServiceManager in combination with built-in methods of the ZF2 EventManager. Not a big deal once set up.

This is how I did it.

1. Adding config variables, site-wide (and how to access them in controllers)

This was easy, once I saw it work. But depends on the preDispatch below.
In your app root, drop a file in config/autoload. Call it:
things.config.local.php

In this file, we just return an array with config items.

return array(
    'things' => array(
        'avalue' => 'avalue1',
        'boolvalue' => true,
    ),
);

So I want access to that in my controller. Assuming you create a config property in the controller, you can access it this way (you set up that property in preDispatch below for it to work). In my system I'd prefer to retrieve the value like so:

$this->config['things']['boolvalue'];

However, you could just call upon it in an action like this:

$config = $this->getServiceLocator()->get('Config');
echo $config['things']['boolvalue'];

That was actually easy. Not sure how to do that with an ini file, but in my case it's not needed, and my ini files are not a big deal to move into arrays directly. Problem 1 solved for me!

2. How to get preDispatch in controllers (because __construct won't load config)

My other problem was getting access to some objects and/or values at a global level AND having them loaded when the controller and actions are initialized. As I understand it, it's not possible to access the ServiceManager config in a controller's __construct.

$this->getServiceLocator()->get('Config');

The above won't work, I believe because the ServiceManager isn't available yet during construction of the controller class. Makes sense.

A couple extra steps, though, and I can get preDispatch working, similar to ZF1. *THEN* the config stuff works, as well as access to global objects, like the database.

In the controller add the below method:

protected function attachDefaultListeners()
{
    parent::attachDefaultListeners();
    $events = $this->getEventManager();
    $events->attach('dispatch', array($this, 'preDispatch'), 100);
    $events->attach('dispatch', array($this, 'postDispatch'), -100);
}

Then add pre and post methods.

 
public function preDispatch (MvcEvent $e)
{
	// this is a db convenience class I setup in global.php
	// under the service_manager factories (will show below)
    $this->db = $this->getServiceLocator()->get('FBDb');
    // this is just standard config loaded from ServiceManager
    // set your property in your class for $config  (protected $config;)
    // then have access in entire controller
    $this->config = $this->getServiceLocator()->get('Config');
    // this comes from the things.config.local.php file
    echo "things boolvalue: " . $this->config['things']['boolvalue'];
}

public function postDispatch (MvcEvent $e)
{
    // Called after actions
}

Problem 2 solved! Init for controllers.

3. How to use the above with ServiceManager to load a global database adapter object for use in Controller

Okay, the last thing I wanted was access to my db globally. And I wanted it controller-wide so I can call $this->db->fetchAll anywhere.

First, set up the service manager in global.php.
Keep in mind, I won't be leaving it exactly like this in my global.php file, but it works for now.
Add these arrays to the return array in global.php:

'db' => array(
    'driver' => 'Pdo',
    'dsn'   => 'mysql:dbname=mydb;host=localhost;',
    'username' => 'root',
    'password' => '',
    'driver_options' => array(
        PDO::MYSQL_ATTR_INIT_COMMAND => 'SET NAMES \'UTF8\''
    ),

),
'service_manager' => array(
    'factories' => array(
        'Zend\Db\Adapter\Adapter'
                => 'Zend\Db\Adapter\AdapterServiceFactory',
        'db-adapter' => function($sm) {
                $config = $sm->get('config');
                $config = $config['db'];
                $dbAdapter = new Zend\Db\Adapter\Adapter($config);
                return $dbAdapter;
             },
        'FBDb'  => function($sm) {
                $dba= $sm->get('db-adapter');
                $db = new FBDb\Db($dba);
                return $db;
             },

    ),
),

In the above, I set up the db config; then service_manager has some factories, which are made available where needed in the rest of the app. In my case, I wanted some convenience and backwards compatibility with some of my old ZF1 code, so I added a custom module called FBDb. I found a wrapper class called ZFBridge by Fabrizio Balliano; it worked great for my needs, and you can find it here:
https://github.com/fballiano/zfbridge

I took that, modified it a bit, and made it a module. So the FBDb object is available in my controllers as my database connection. Same with "db-adapter" if I want to utilize it elsewhere.

Anyway, in my controller I set up "protected $db;" at the start of the class so I have that property available. Then, as shown in #2 above, preDispatch assigns the FBDb database object to $this->db.

$this->db = $this->getServiceLocator()->get('FBDb');

Then in my action methods, if I want, I can call a record set or db value with this:

$sql = 'select * from customer where cust_nbr between ? and ?';
$rs = $this->db->fetchResults($sql, array('120400', '125250'));

OR, for an array returned:

$rs = $this->db->fetchAll($sql, array('120400', '125250'));

(I added fetchResults to ZFBridge to return only the PDO result object from the db query, for use in a foreach later.)

I know some of this is probably bad design or "bad OOP", but it works for me and I like it. I personally don't like using pure object-based data entities for everything, just some things. Much of the time, I just want to drop a resultset in and be done with it. :)

My original Stackoverflow.com questions and solutions:

http://stackoverflow.com/questions/14128085/zend-framework-2-module-share-variables-between-controllers-onbootstrap/14145872#14145872

http://stackoverflow.com/questions/14107346/zend-framework-2-db-adapter-adapter-query-resultset-like-zf1/14118823#14118823

 

Pastebin of the code above: http://pastebin.com/embed_iframe.php?i=33tV7S9Z

 

 

 

Posted in Development, PHP, Zend Framework Tagged with: , , ,

Exclude ZFS filesystems from zfs-auto-snapshot on Ubuntu and remove them


Just ran into this quick blurb…

zfs set com.sun:auto-snapshot=false butank/DATA/backups

So I have backups running to a secondary storage ZFS pool in my server, and today I realized my disk space was being consumed by ZFS snapshots. I thought there's no way even my backups are using that much data, even with rotations and archives. Checking "zfs list -o space" confirmed it: there were hundreds of gigs used by snapshots.
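
The check that confirmed it, recursively for the pool (pool name from my setup above):

zfs list -o space -r butank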

I found a post by @dajhorn, one of the ZFS on Linux developers and Ubuntu package maintainers. He suggested the zfs set command above to exclude ZFS filesystems from the zfs-auto-snapshot package/script on Ubuntu.

https://github.com/dajhorn/zfs-auto-snapshot/issues/7

Next… I needed to remove all those auto snapshots. Again, with the help of dajhorn I ended up with this script:

Pastebin here destroy-zfs-auto-snaps-for-fs.sh:
http://pastebin.com/3pLJZa2E

WordPress tends to mangle script formatting, so use the Pastebin above if anything below looks off.

— destroy-zfs-auto-snaps-for-fs.sh —

#!/usr/bin/env bash
# destroy-zfs-auto-snaps-for-fs.sh -- recursively destroy zfs-auto-snapshot
# snapshots (not manually created ones) for a filesystem and its children.
if [ -n "${1}" ]
then
  echo "This will *RECURSIVELY* destroy all ZFS auto snapshots (not your manually created snaps)."
  echo "Parent and child filesystem snapshots to be destroyed: ${1}"
  echo "Continue? (y/n)"
  read ANS
  if [ "$ANS" = "y" ]
  then
    echo "Listing snapshots to be destroyed..."
    for ii in $(zfs list -r -t snapshot -o name "${1}" | grep @zfs-auto-snap); do echo "$ii"; done
    echo "The above snapshots will be destroyed, sound like a plan? (y/n)"
    read PLAN
    if [ "$PLAN" = "y" ]
    then
      for ii in $(zfs list -r -t snapshot -o name "${1}" | grep @zfs-auto-snap); do echo "$ii"; zfs destroy "$ii"; done
      echo "ZFS auto snaps for ${1} destroyed!"
    else
      echo "Not a plan then... exiting."
    fi
  else
    echo "Not destroying... exit."
  fi
  echo "Done."
else
  echo "Exiting. You did not provide a ZFS filesystem. (destroy-zfs-auto-snaps-for-fs.sh zpool/some/fs)"
fi

 

Quick notes: I did find an awesome one-liner here:
http://serverfault.com/questions/340837/how-to-delete-all-but-last-n-zfs-snapshots

But I didn't want to leave any snapshots; just my preference, I wanted them all cleaned out. Besides, it's easy enough to quickly create a new recursive snapshot for the desired filesystem with the "-r" switch, and with a custom name it won't be removed by the script above.

Happy ZFS-ing!

 

Posted in Backup, Linux, Problems, Ubuntu Tagged with: , , , , ,

Finally got my iPad Mini and I love it


My awesome wife brought home an iPad Mini for me last night. :)

My super quick review… pretty much like most other reviews…

Kicks butt!! The nerds out there who complain about the non-retina screen, I partially agree with; regular people won't care. The nerds complain about the older processor (iPad 2 CPU, etc.); I agree it's not quite as fast as the iPad 3. Again, only the geeks care. The iPad Mini is also way too expensive if you want LTE and extra memory; most people will care about that one. But if you get the 16GB WiFi-only model, it's not *too* bad, and personally, I love what I get in the i-ecosystem over Android or Windows. (One could argue that to death though, so just leave it as a personal thing.)

But here's what I think: the size and comfort and ease of use are so great, they vastly outweigh the issues people complain about. I find myself constantly reaching for the Mini over the large one already. You won't believe how small (yet not too small!) this is, and how nice it feels to hold and use!

I do think I will still use the large one quite a bit, but it depends what for. For *work* stuff, like email, remote server SSH, things I need to do more typing on, or things where a larger view is more helpful, I'll definitely use the iPad 3. It is slightly faster, and that large screen is nice for some things, like photo editing. Typing on the iPad Mini is just fine for me, but its slightly smaller size isn't quite as easy. I found, once I got used to it, typing on the iPad 3 (or "iPad Large") screen is pretty easy and quick, but the Mini's size is just a little small for speed use. Not that I go fast at all, it's just easier to move a little quicker on the larger screen. For any reading, web research, video watching, and a lot of gaming (but not all), I'll prefer the Mini. It just depends, but I am finding 9 out of 10 times I want to grab the iMini. It's by no means slow, and it is so perfect for casual or mobile use. I can't wait to take it out on the town for work visits.

A quick note about games on the iPad Mini: I played the new Wraithborne game, Autumn Dynasty (an RTS), and Galaxy on Fire 2 HD. They all play great on it, with no slowdowns or issues. The only real slowdown I ran into was in the iTunes Store! But that's slow even on my iPad Large. :)

BTW, I got the 32GB Verizon version.

Posted in Geek, iPad, Uncategorized

Create ISO image from CD or DVD disk in Linux

I just ran into this solution, which works pretty well. I first found that I could create an ISO image from my CD by using cat, like so:

cat /dev/sr0 > /path/to/new.iso

However, using the "right tool for the job", we can use the "readom" command-line tool. According to the man page, readom is used to read or write Compact Discs. In my case I had a really old Windows NT4 CD I wanted to play with in some VMs, and of course, it's so much faster and easier to work with ISO images, so I used that as my test. (Why NT4?? For no good reason, really, just for fun. I know it's pointless.) I am using Ubuntu 12.04 and I popped in the NT disc. Ubuntu mounts it automatically, so I had to issue a umount first.

sudo umount /dev/sr0

Then, make the ISO from my CD:

sudo readom dev=/dev/sr0 f=/home/greg/files/windows-nt4.iso

That was it!!
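
To sanity-check the result, the image can be loop-mounted like any ISO:

sudo mount -o loop /home/greg/files/windows-nt4.iso /mnt
ls /mnt
sudo umount /mnt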

 

Posted in Geek, Linux Tagged with: , , ,