Exploring the new Android Design Support Library

I’m a massive fan of material design. Everything about it provides a strong feeling of consistency between applications and as a whole makes them both easier and more aesthetically pleasing to use. Google I/O 2015 saw the introduction of some great new assets to the world of Android — including the new Design Support Library. With the introduction of this, there’s now no excuse not to follow the Material Design Guidelines provided by Google.

Let’s take a look at these new out-of-the-box components that we now have available to us.


Snackbars automatically animate in and out of view

Mostly inheriting the same methods and attributes as the Toast component, the Snackbar is a new component that allows us to show a quick message to the user at the bottom of the screen. Once animated in, the user can either interact with the Action (if one has been set) or dismiss the Snackbar by swiping it off the screen. If neither of these occurs, then it’ll automatically animate off of the screen after the given timeout.

Actions can be added to snackbars for user interaction

For developers, it’s also dead easy to implement with a few lines of code (you don’t want to break the line limit now do you…):

Snackbar.make(mDrawerLayout, "Your message", Snackbar.LENGTH_SHORT)
    .setAction(getString(R.string.text_undo), this)
    .show();

Note: Whilst you can only display a single Snackbar at any given time, it is possible to ‘queue’ multiple Snackbars to be shown in the order that the show() method is called on each instance.

Floating Action Button

A Floating Action Button (FAB) is a standard component for prompting interaction with a specific action, e.g. adding a new item to a list. It can now be implemented easily throughout our applications, without the third-party libraries that were previously the only option.

The button comes in one of two sizes:

Normal (56dp) — This size should be used in most situations.

Mini (40dp) — Should only be used when there is a need for visual continuity with other components displayed on the screen.

Normal (left) and Mini (right) FAB buttons

By default, the FAB will use the application theme accent colour for its background. However, we can easily change the background colour of an individual button, along with many other attributes that we may wish to alter:

  • fabSize – Used to set the size of the button (‘normal’ or ‘mini’)
  • backgroundTint – Used to set the background colour of this instance
  • borderWidth – Used to give the button a border
  • rippleColor – Used to set the colour of the ripple effect when pressed
  • src – Used to set the icon displayed within the FAB

Again, this is super easy to add to our layout file:

    <android.support.design.widget.FloatingActionButton
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        app:fabSize="normal" />

EditText Floating Labels

The new TextInputLayout allows us to wrap EditText views in order to display floating labels above the EditText field. When an EditText has focus, the assigned hint will ‘float’ above the view to the top-left hand side. This is useful as it still provides context to the user whilst data is being input.

To implement this we just wrap our EditText in the TextInputLayout:


    <android.support.design.widget.TextInputLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content">

        <EditText
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:hint="@string/hint_email" />

    </android.support.design.widget.TextInputLayout>


Error messages are also supported, and can be shown by simply adding the following to our class:

    textInputLayout.setErrorEnabled(true);
    textInputLayout.setError(getString(R.string.text_error_message));
Note: Setting the error message after setting the ‘errorEnabled’ flag will ensure the size of the layout doesn’t alter when the error message is shown.

Navigation View

The Navigation Drawer is a commonly used component in modern applications, but implementing it over and over was never a quick process – until now! The new NavigationView component can simply be placed within our DrawerLayout (see code example below) and will display our navigation items from the referenced menu resource.

The navigation drawer makes it easier for users to navigate the different sections of your application

    <android.support.v4.widget.DrawerLayout
        xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:app="http://schemas.android.com/apk/res-auto"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <!-- Your content layout -->

        <android.support.design.widget.NavigationView
            android:layout_width="wrap_content"
            android:layout_height="match_parent"
            android:layout_gravity="start"
            app:headerLayout="@layout/drawer_header"
            app:menu="@menu/drawer" />

    </android.support.v4.widget.DrawerLayout>


This view supports two key attributes:

Header Layout

The optional headerLayout attribute is used to declare a layout to be used for the header section of the Drawer. This is the space shown above our navigational items, a common use is a profile section header.


Menu

The menu attribute is used to declare the menu resource to be used for the navigation items in the drawer. It is also possible to call inflateMenu() to inflate a menu programmatically.

Navigation menus can be used with or without sub-headings

As shown above, there are two approaches for our NavigationView menus. The first approach is achieved by using a standard set of grouped checkable items:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <group android:checkableBehavior="single">
        <item
            android:id="@+id/navigation_item_1"
            android:title="@string/navigation_item_1" />
        <item
            android:id="@+id/navigation_item_2"
            android:title="@string/navigation_item_2" />
    </group>
</menu>

Here the items are simply shown in a vertical list, no subheadings are displayed and the items all belong in the same group.

The second is similar, but this time we can use a sub-header for our sets of navigation items. As seen below, I have applied a sub-header to the set of items in my menu resource:

<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:title="@string/navigation_subheader">
        <menu>
            <group android:checkableBehavior="single">
                <!-- Menu items go here -->
            </group>
        </menu>
    </item>
</menu>

This allows us to separate our menu items by the use of a header. This can be useful if menu items are grouped into specific sets, allowing some form of separation on screen.

It is also possible for us to add menu items programmatically; we just have to call getMenu() to retrieve the menu, and items can then be added to that instance.

There are several other important attributes that we can easily change, these are:

  • itemBackground — Used to set the background resource of the menu items
  • itemIconTint — Used to apply a tint to the icons
  • itemTextColor — Used to set the text color of the menu items

In order to capture click events on our menu items, we just need to set an OnNavigationItemSelectedListener; this will allow us to react to any touch events that take place on our menu.

Note: For API21+, the NavigationView automatically takes care of scrim protection for the status bar.


TabLayout

The TabLayout is another new component that’ll make our lives easier by providing a scrollable tab bar component for use in our applications. There are several ways in which we can use these:

Fixed tabs filling view width
Fixed tabs, centered in view

Tabs can also be made scrollable

To begin with, we need to add the TabLayout to our layout:

    <android.support.design.widget.TabLayout
        android:id="@+id/tabs"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:tabMode="fixed"
        app:tabGravity="fill" />

Once done, there are several important attributes here that we can set to adjust the appearance of our TabLayout:

  • tabMode – This sets the mode to use for the TabLayout. This can either be fixed (all tabs are shown concurrently) or scrollable (show a subset of tabs that can be scrolled through)
  • tabGravity – This sets the gravity of the tabs, which can be either fill (distribute all available space between individual tabs) or center (position tabs in the center of the TabLayout)
  • setText() – This method is used to set the text to be displayed on the tab
  • setIcon() – This method is used to set the icon to be displayed on the tab

We also have access to several different kinds of listeners that we can set when using the TabLayout view:

  • OnTabSelectedListener – This can be set to listen for changes to a tab’s selected state
  • TabLayoutOnPageChangeListener – Contains the callbacks to the corresponding TabLayout and handles the syncing of tab selected states. It can be set programmatically without removing the existing listener, as the TabLayout is stored weakly within the class
  • ViewPagerOnTabSelectedListener – Contains the callbacks to the corresponding ViewPager; again, this handles the syncing of tab selected states

Once the view has been added to our layout the implementation is simple: you just need to call the setupWithViewPager() method to attach the TabLayout to your ViewPager:

ViewPager pager = (ViewPager) rootView.findViewById(R.id.viewpager);
pager.setAdapter(new MyPagerAdapter(getActivity().getSupportFragmentManager()));

TabLayout tabLayout = (TabLayout) rootView.findViewById(R.id.sliding_tabs);
tabLayout.addTab(tabLayout.newTab().setText("Tab One"));
tabLayout.addTab(tabLayout.newTab().setText("Tab Two"));
tabLayout.addTab(tabLayout.newTab().setText("Tab Three"));

Note: Tabs should be added either as above or from within a ViewPager. Using setTabsFromPagerAdapter() will cause only tabs that have been added inside of your PagerAdapter to be used, removing any that have been added using the addTab() method.

Coordinator Layout

The CoordinatorLayout builds on top of the motion effects already provided, adding the ability to transition views based on the motion of others.

To ensure the features of this component work as intended, please make sure that your other support library dependencies are using the latest version. I needed to update RecyclerView to version 22.2.0 in order for it to work properly with some of the design support library features.

This layout adds two new attributes that can be used to control the anchoring of a view in relation to other views on screen.

  • layout_anchor — Used to anchor the view on the seam (edge) of another view
  • layout_anchorGravity — Used to set the gravity to the applied anchor

Floating Action Button

We previously looked at the Snackbar and touched on how this is shown on top of all other UI components. However, we are able to link our FloatingActionButton to our Snackbar so that when the bar is shown it pushes the FAB up, rather than overlapping it.

Snackbars can push FABs instead of overlapping them

In order to implement this, our FloatingActionButton first needs to be a child of our CoordinatorLayout. Next, you’ll need to ensure that you’ve set the layout_gravity to declare the desired position of our FAB.

    <android.support.design.widget.CoordinatorLayout
        android:id="@+id/coordinator"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <!-- Your other views -->

        <android.support.design.widget.FloatingActionButton
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_gravity="bottom|end"
            app:fabSize="normal" />

    </android.support.design.widget.CoordinatorLayout>

Finally, when constructing our Snackbar, we just need to pass our CoordinatorLayout as the view parameter, as below:

Snackbar.make(mCoordinator, "Your message", Snackbar.LENGTH_SHORT)
    .show();

App Bar

The CoordinatorLayout lets us adapt our layouts based on different scroll events that may take place, allowing us to alter the appearance of our views (such as the Toolbar) when the user scrolls the content on the screen.

In order to achieve this, we first need to set the scroll property within the layout_scrollFlags attribute. This is used to declare whether views should scroll off screen or remain pinned at the top; this property must then be followed by one of the following:

  • enterAlways – Used to enable quick return, where the view will become visible when a downward scroll occurs

Collapsing the toolbar, but keeping the tabs in view
  • enterAlwaysCollapsed – If the corresponding view has a minHeight, then it’ll only enter at this height and expand fully once the scrolling view has reached the top

Collapsing the toolbar completely, including any ‘flexible space’ within the view
  • exitUntilCollapsed – Used to declare that the view should scroll off the screen until it is collapsed before the content begins to exit

Collapsing the toolbars ‘flexible space’, but keeping the toolbar itself in view

Note: Views that are using the scroll flag must be declared before any views that do not. This will ensure that these declared views all exit from the top, in turn leaving all of the fixed views behind.

As shown in the code below, our RecyclerView uses the layout_behavior attribute in order to allow it to work with our CoordinatorLayout. This means that the layout is able to react to the RecyclerView’s scroll events. The code also shows that the Toolbar has its layout_scrollFlags attribute set, meaning that when the RecyclerView is scrolled, its scroll events are captured and our Toolbar will slide out of view. However, we haven’t declared this attribute for our TabLayout, so it will remain pinned at the top of the screen.

    <android.support.design.widget.CoordinatorLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent">

        <android.support.v7.widget.RecyclerView
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            app:layout_behavior="@string/appbar_scrolling_view_behavior" />

        <android.support.design.widget.AppBarLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content">

            <android.support.v7.widget.Toolbar
                android:layout_width="match_parent"
                android:layout_height="?attr/actionBarSize"
                app:layout_scrollFlags="scroll|enterAlways" />

            <android.support.design.widget.TabLayout
                android:layout_width="match_parent"
                android:layout_height="wrap_content" />

        </android.support.design.widget.AppBarLayout>

    </android.support.design.widget.CoordinatorLayout>



You can now wrap a Toolbar component with the new CollapsingToolbarLayout, which allows the layout to collapse as the user scrolls the screen’s content:

    <android.support.design.widget.CollapsingToolbarLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_scrollFlags="scroll|exitUntilCollapsed">

        <android.support.v7.widget.Toolbar
            android:layout_width="match_parent"
            android:layout_height="?attr/actionBarSize"
            app:layout_collapseMode="pin" />

    </android.support.design.widget.CollapsingToolbarLayout>

When using this component, the layout_collapseMode attribute needs to be set; it can be one of two options.

  • Pin – Setting the collapseMode to pin will cause the toolbar to remain pinned at the top of the screen once the CollapsingToolbarLayout has been fully collapsed.

  • Parallax – Using the parallax mode will allow the content (e.g. the image used within an ImageView) to translate vertically whilst the CollapsingToolbarLayout is collapsing. Setting the optional layout_collapseParallaxMultiplier attribute when using parallax gives control over the translation multiplier during the transition

Another great thing about both of these approaches is that calling setTitle() directly on the CollapsingToolbarLayout will cause the title text to automatically start larger, shrinking to a smaller size once the CollapsingToolbarLayout has fully collapsed.
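Putting the parallax pieces together, a collapsing header might look like this (a sketch; the drawable name and the multiplier value are placeholders):

```xml
<android.support.design.widget.CollapsingToolbarLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:layout_scrollFlags="scroll|exitUntilCollapsed">

    <!-- The image translates vertically as the layout collapses -->
    <ImageView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:scaleType="centerCrop"
        android:src="@drawable/header_image"
        app:layout_collapseMode="parallax"
        app:layout_collapseParallaxMultiplier="0.7" />

    <!-- The toolbar stays pinned once fully collapsed -->
    <android.support.v7.widget.Toolbar
        android:layout_width="match_parent"
        android:layout_height="?attr/actionBarSize"
        app:layout_collapseMode="pin" />

</android.support.design.widget.CollapsingToolbarLayout>
```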

Custom Views

It doesn’t end there! You can also define a Behaviour for custom views, allowing callbacks to be received when onDependentViewChanged() is called. This also allows for better handling of touch events, gestures and dependencies between child views.

So what are you waiting for? Add the library to your dependencies and get cracking!

compile 'com.android.support:design:22.2.0'
Posted in Linux | Leave a comment


LVS DR-mode load balancing

Layer-4 load balancing with LVS operates at the transport layer. It has three modes – NAT, DR and TUN. Here we focus on DR mode because, compared with the other two, it is far more widely used in industry and offers higher availability and efficiency.


  1. Before building our DR load balancer, let’s sketch the topology, making clear the relationships between the machines and the path the packets will take. Here we define one client, one gateway (GW), one Director (the distributor) and two back-end servers. The goal is that when the client accesses the service through the router, the content shown alternates between the two servers.

  2. Now let’s configure everything according to the topology.


    [root@localhost ~]# route add default gw dev eth0

  3. Next, configure the router/gateway: we need to enable IP forwarding on the router machine.

    [root@localhost ~]# echo 1 > /proc/sys/net/ipv4/ip_forward

  4. Then configure the two back-end servers:

    1. Both servers need the HTTP server installed, with an index.html test page created on each. For the test to show anything, the two pages must not be identical.

    2. We need to add a lo:1 interface (a loopback alias) on both servers, to ensure that reply packets are sent back towards our gateway. Only then does DR mode achieve its purpose.

    [root@localhost ~]# ifconfig lo:1     // the netmask here must be a /32

    3. Then we need to turn on the following kernel settings:

    [root@localhost ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore     // default 0; set to 1 so the server only answers ARP requests for addresses on the receiving interface

    [root@localhost ~]# echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce   // default 0; set to 2 so ARP announcements never use the lo:1 address

    4. Finally, point the default gateway at the router (GW):

    # route add default gw dev eth0    // set the server’s default gateway //
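If the ARP settings above should survive a reboot, they can also be written to /etc/sysctl.conf on each back-end server (a sketch; adjust the interface name if it is not eth0):

```
# /etc/sysctl.conf on each back-end server
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
```

Run `sysctl -p` afterwards to apply the file without rebooting.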


  5. Next we configure the Director:


    [root@localhost ~]# yum install ipvsadm


    [root@localhost ~]# ipvsadm -A -t -s rr

    [root@localhost ~]# ipvsadm -a -t -r -g     // -g: direct routing

    [root@localhost ~]# ipvsadm -a -t -r -g
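With hypothetical addresses filled in (VIP 192.168.10.100:80 and real servers 192.168.10.11 and 192.168.10.12; these are placeholders, not the article’s own values, which were lost), the complete commands would look like:

```shell
ipvsadm -A -t 192.168.10.100:80 -s rr                  # add the virtual service, round robin
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.11 -g    # -g: direct routing (DR)
ipvsadm -a -t 192.168.10.100:80 -r 192.168.10.12 -g
```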


    [root@localhost ~]# ipvsadm -Ln

    [root@localhost ~]# ipvsadm -Ln --stats

  6. Finally, the test:



    Running "ipvsadm -Ln --stats" shows INBYTES increasing (packets flowing in) while OUTBYTES stays at zero (no packets flowing out), proving that the replies never pass back through the Director: they travel directly via the router. This takes the return traffic off the Director, giving us true load balancing.




  • In a lab environment on RHEL 6, the Director needs the gateway of the router’s downstream port to be specified; in real-world deployments, or on RHEL 5, this is not necessary.


Android MQTT unable to create client

Client creation fails because Paho’s default file persistence tries to write to a directory the app cannot use on Android; pointing MqttDefaultFilePersistence at the app’s own data directory fixes it:

    MqttClientPersistence persistence =
            new MqttDefaultFilePersistence(mContext.getApplicationInfo().dataDir);
    mqttclient = new MqttAsyncClient(url.toString(), clientId, persistence);

Max MQTT connections

I have a need to create a server farm that can handle 5+ million connections, 5+ million topics (one per client), process 300k messages/sec.

I tried to see what various message brokers were capable of, so I am currently using two RHEL EC2 instances (r3.4xlarge) to make lots of resources available. So you do not need to look it up: each has 16 vCPUs and 122GB RAM. I am nowhere near that limit in usage.

I am unable to pass the 600k connection limit. Since there doesn’t seem to be any O/S limitation (plenty of RAM/CPU/etc.) on either the client or the server, what is limiting me?

I have edited /etc/security/limits.conf as follows:

* soft  nofile  20000000
* hard  nofile  20000000

* soft  nproc  20000000
* hard  nproc  20000000

root  soft  nofile 20000000
root  hard  nofile 20000000

I have edited /etc/sysctl.conf as follows:

net.ipv4.ip_local_port_range = 1024 65535  
net.ipv4.tcp_tw_reuse = 1 
net.ipv4.tcp_mem = 5242880  5242880 5242880 
net.ipv4.tcp_tw_recycle = 1 
fs.file-max = 20000000 
fs.nr_open = 20000000 
net.ipv4.tcp_syncookies = 0

net.ipv4.tcp_max_syn_backlog = 10000 
net.ipv4.tcp_synack_retries = 3 
net.core.optmem_max = 20480000

For Apollo: export APOLLO_ULIMIT=20000000

For ActiveMQ:

ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.UseDedicatedTaskRunner=false"

I created 20 additional private addresses for eth0 on the client, then assigned them: ip addr add dev eth0

I am FULLY aware of the 65k port limit per address, which is why I did the above.

  • For ActiveMQ I got to: 574309
  • For Apollo I got to: 592891
  • For Rabbit I got to 90k, but logging was awful and I couldn’t figure out how to go higher, although I know it’s possible.
  • For Hive I got to the trial limit of 1000. Awaiting a license.
  • IBM wants to trade the cost of my house to use them – nah!



Capturing packets with Wireshark on Mac OS X


Wireshark’s GUI releases for UNIX-like systems use X Window (renamed to X11 in 1987). Mac OS X dropped X11 after Mountain Lion, replacing it with the open-source XQuartz (X11.app). So, before installing Wireshark on Mac OS X, you first need to download and install XQuartz.


Once XQuartz (XQuartz-2.7.6.dmg) is installed, log out and back in as prompted so that XQuartz becomes the default X11 server.

After a successful install, type "xterm -help" in a terminal to see command-line help, or "xterm -version" for version information. Typing "xterm", or using the menu "Applications->Terminal", starts the terminal (xterm) of X Window (XQuartz).


Install the OS X 10.6 and later Intel 64-bit build (Wireshark 1.12.0 Intel 64.dmg).

After a successful install, type "wireshark --help" in a terminal for command-line help, or "wireshark -v" for version information.




If XQuartz (X11) is already installed, the installer prompts you to choose the path to X11.app. Click Browse and select "/Applications/Utilities/XQuartz.app" to make XQuartz the X11 front end for Wireshark. (The latest Wireshark 1.12.4 Intel 64.dmg appears to detect an installed X11 path automatically, without needing it to be set by hand.)


Wireshark fails to launch after upgrading Mac OS X to Yosemite

After upgrading Mac OS X to Yosemite, Wireshark will not start, because Yosemite moved the X11 symlinked folder from /usr to /opt. This can be solved in any of three ways:

– Fix 1: reinstall or upgrade XQuartz

– Fix 2: move /opt/X11 to /usr/X11: sudo mv /opt/X11 /usr/X11

– Fix 3: symlink /opt/X11 to /usr/X11: sudo ln -s /opt/X11 /usr/X11


Click "Interface List" on the start page (equivalent to the menu "Capture->Interfaces") to see the machine’s active NICs (these can also be checked via the system menu Apple->About This Mac->More Info->Overview->System Report, or with the ifconfig command).

Because my iMac is on a wireless connection, en1 (wireless NIC) and lo0 (local loopback) are active, while the wired NIC en0 is inactive. Note: on a MBP the wireless NIC is en0; use ifconfig to tell them apart.

Here you can simply tick en1 and press Start to capture with the default Capture Options (promiscuous mode is ticked by default); or click "Wi-Fi: en1" on the start page to open "Edit Interface Settings" and configure the capture first (for example, Capture Filters can restrict the capture to packets of a given type or condition); or begin from the menu "Capture->Options", pick the NIC and its settings, and start capturing there.

We selected the wireless NIC (Wi-Fi: en1), so why are we capturing Ethernet packets? These are in fact fake Ethernet headers supplied by some BSDs. To capture raw IEEE 802.11 wireless frames (beacon frames and so on), you need to enable monitor mode and set the Link-layer header type to one of the 802.11 options.

In addition, entering "http" in the Display Filter will filter the captured packets and show only HTTP protocol packets.


Tick "Capture packets in monitor mode" in "Edit Interface Settings" or "Capture Options" to enable monitor mode, then set the Link-layer header type to "802.11" to start capturing wireless frames.


(1) Enabling Monitor Mode on an iMac may block the NIC and take the machine offline; see the related notes below.


(3) Common types of 802.11 wireless frames (802.11 frames):

(1)Management Frame

Type/Subtype: Beacon frame (0x0008,Bit[5:0]=00,1000B)

Type/Subtype: Probe Response (0x0005,Bit[5:0]=00,0101B)

(2)Control Frame

Type/Subtype: 802.11 Block Ack (0x0019,Bit[5:0]=01,1001B)

Type/Subtype: Request-to-send (0x001b,Bit[5:0]=01,1011B)

Type/Subtype: Clear-to-send (0x001c,Bit[5:0]=01,1100B)

Type/Subtype: Acknowledgement (0x001d,Bit[5:0]=01,1101B)

(3)Data Frame

Type/Subtype: Data (0x0020,Bit[5:0]=10,0000B)

Type/Subtype: Null function (No data) (0x0024,Bit[5:0]=10,0100B)

Type/Subtype: QoS Data (0x0028,Bit[5:0]=10,1000B)

Type/Subtype: QoS Data + CF-Ack + CF-Poll (0x002b,Bit[5:0]=10,1011B)




  • Use the Mac’s network sharing to share the Mac’s connection over Wi-Fi for the iPhone to join;
  • Use proxy software (e.g. Charles) to run an HTTP proxy server on the Mac.


Apple introduced the "Remote Virtual Interface" (RVI) feature in iOS 5: a virtual network interface can be created on the Mac that acts as a tap on the iOS device’s network stack, so that all traffic passing through the iOS device also passes this virtual interface. The virtual interface only observes the device’s own protocol stack (it does not relay the device’s traffic over the Mac’s connection); all network connections belong to the iOS device itself, regardless of whether or how the Mac is online. The iOS device can be on any network type (WiFi/2G/3G), so any capture tool on the Mac (tcpdump, Wireshark, CPA) capturing on the RVI interface effectively captures the iPhone’s traffic.

Mac OS X supports RVI through the rvictl terminal command; type "rvictl ?" in Terminal to see the help:


rvictl options:

    -l, -L      List currently active devices
    -s, -S      Start a device or set of devices
    -x, -X      Stop a device or set of devices


(2) Create the virtual interface with rvictl -s

First, connect the iPhone over an MFi USB cable to a Mac running Mac OS + Xcode 4.2 (or later). iOS 7 and above requires Xcode 5.0 (or later). The device must stay connected throughout the capture.


Next, create the RVI interface with the "rvictl -s" command, passing the iPhone’s UDID as the argument.


$rvictl -s <UDID>


Once created, the ifconfig command shows an extra rvi0 interface. When several iOS devices are attached to the iMac, they appear as rvi1, rvi2, and so on; "rvictl -l" lists all attached virtual interfaces.

On the Wireshark start page, select rvi0 and capture with the default Capture Options to start capturing the iPhone’s traffic.





(3) Remove the virtual interface with rvictl -x

Remove the RVI interface with the "rvictl -x" command, again passing the iPhone’s UDID as the argument.


$rvictl -x <UDID>
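The whole RVI workflow, end to end (the UDID shown here is a placeholder):

```
$ rvictl -s 0123456789abcdef0123456789abcdef01234567   # attach: creates rvi0
$ rvictl -l                                            # list attached devices
$ ifconfig rvi0                                        # confirm the interface exists
$ tcpdump -i rvi0 -n                                   # or capture rvi0 in Wireshark
$ rvictl -x 0123456789abcdef0123456789abcdef01234567   # detach when finished
```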


7. About Monitor Mode

(1) Wireshark FAQ, Q 10.1: How can I capture raw 802.11 frames, including non-data (management, beacon) frames?

NOTE: an interface running in monitor mode will, on most if not all platforms, not be able to act as a regular network interface; putting it into monitor mode will, in effect, take your machine off of whatever network it’s on as long as the interface is in monitor mode, allowing it only to passively capture packets.

This means that you should disable name resolution when capturing in monitor mode; otherwise, when Wireshark (or TShark, or tcpdump) tries to display IP addresses as host names, it will probably block for a long time trying to resolve the name because it will not be able to communicate with any DNS or NIS servers.

(2) AirSnort FAQ, Q 3: What is the difference between monitor and promiscuous mode?

Monitor mode enables a wireless NIC to capture packets without associating with an access point or ad-hoc network. This is desirable in that you can choose to “monitor” a specific channel, and you need never transmit any packets. In fact transmitting is sometimes not possible while in monitor mode (driver dependent). Another aspect of monitor mode is that the NIC does not care whether the CRC values are correct for packets captured in monitor mode, so some packets that you see may in fact be corrupted.


In fact, installing XQuartz + Wireshark on OS X 10.9.4 went completely smoothly for me, without any of the permission problems such as no interface being found (no interface available).

Before installing, "ls -l /dev/bpf*" showed that bpf0/bpf1/bpf2/bpf3 had only "rw-------" permissions; after installation, the same command shows the permissions upgraded to "rw-rw----", i.e. Wireshark has granted rw access to the group of the user (administrator) who installed it. This is equivalent to having run "sudo chmod g+rw /dev/bpf*", and it is configured to apply at every boot, so the command does not have to be re-run after a restart.


CapturePrivileges – you must have sufficient privileges to capture packets, e.g. special privileges allowing capturing as a normal user (preferred) or root / Administrator privileges.

In order to capture packets under BSD (including Mac OS X), you must have read access  to the BPF devices in /dev/bpf*.

Enabling and using the “root” user in Mac OS X

Platform-Specific information about capture privileges

Howto securely configure Mac OS X for network packet sniffing with Wireshark



Taking the GET http://blog.csdn.net/phunxm request from section 4 as an example, here is a brief analysis.


(1) The destination MAC is 00:00:0c:07:ac:24 (CISCO All-HSRP-routers_24)

Socket programming usually happens above the IP layer, so we generally do not care about MAC addresses. Our machine also cannot be connected directly to the CSDN blog server; there are usually many intermediate nodes (hops) in between. Following the neighbour protocol, the destination MAC address is that of the next hop (the [default] gateway/router).

Network Utility, "route -n get default" or "netstat -rn" gives the default gateway as 10.64.66.1. "arp -a" shows "? ( at 0:0:c:7:ac:24 on en1 ifscope [ethernet]".
(2) The destination IP is 10.14.36.100

This is because the intranet uses an HTTP proxy: proxy.pac decides the egress based on the domain name and returns the proxy server’s IP address.


For the reply (HTTP/1.1 200 OK), the destination MAC/IP addresses are those of the receiver (this machine).

(1) The source IP address is the proxy server’s IP address, as above.


Windows’ "route print" or Mac OS X/UNIX’s "netstat -r" shows the routing table. Windows’ "tracert" or Mac OS X/UNIX’s "traceroute" shows the route from this machine to the proxy 10.14.36.100; the first hop is not the default gateway, but 10.64.66.2!







Getting a Packet Trace with Sniffing Tools under OS X

Practical Packet Analysis with Wireshark

Wireshark capture under Mac OS X

Launching Wireshark 1.10.0 on Mac OS X Mountain Lion

Remote packet capture with Wireshark on Mac OS X Lion


RVI for iOS app network analysis; Capturing packets on iOS devices with RVI


RVI + Wireshark capture broken after upgrading to iOS 7?; Mavericks – cannot capture from iPhone using RVI


About Wireless Diagnostics; OSX Lion Wi-Fi Diagnostics


Installing Aircrack on Mac OS; Aircrack wireless cracking in detail; Aircrack-ng as a tool for cracking WEP and WPA-PSK

Wi-Fi capture and man-in-the-middle attacks with BT5 + Wireshark; Cracking wireless passwords (WPA/WEP) with BT5 aircrack-ng


Best Practices for Building Angular.js Apps

Burke Holland had a fantastic post explaining how Angular loads an application and comparing the merits of browserify vs require.js in an Angular app.

I’ve worked with Angular on quite a few apps at this point, and have seen many different ways to structure them. I’m writing a book on architecting Angular apps with the MEAN stack right now, and as such have researched heavily into this specific topic. I think I’ve settled on a pretty specific structure I’m very happy with. It’s a simpler approach than what Burke Holland has proposed.

I must note that if I was on a project with his structure, I would be content. It’s good.

Before we start though, the concept of modules in the world of Angular can be a bit confusing, so let me lay out the current state of affairs.

What modules are in JavaScript

JavaScript comes with no ability to load modules. A “module” means different things to different people. For this article, let’s use this definition:

Modules allow code to be compartmentalized to provide logical separation for the developers. In JavaScript, it also prevents the problem of conflicting globals.

People new to JavaScript get a little confused about why we make such a big deal about modules. I want to make one thing clear: modules are NOT for lazy-loading JavaScript components when needed. Require.js does have this functionality, but that is not the reason it is important. Modules are important because the language has no support for them, and JavaScript desperately needs it.

A module can be different things. It could be Angular, lodash (you’re not still using underscore, are you?), shared code in your organization, some gist you found online, or separating features out inside your codebase.

JavaScript doesn’t support modules natively, so we’ve traditionally had a few different approaches. (Feel free to skip this next section if you understand JavaScript modules.)


Let me illustrate the problem. Let’s say you want to include jQuery in your project. jQuery will define the global variable ‘$’. If, in your code, you have an existing variable ‘$’ those variables will conflict. For years, we got around this problem with a .noConflict() function. Basically .noConflict() allows you to change the variable name of the library you’re using.

If you had this problem, you would use it like this:

using .noConflict()
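The lost snippet can be sketched like this; a stand-in library object is used here instead of real jQuery, but the noConflict() mechanics are the same:

```javascript
// Stand-in for a library that claims the global '$' (as jQuery does).
var globalScope = (typeof window !== 'undefined') ? window : global;

globalScope.$ = 'my existing value';        // a '$' we already had

(function (scope) {
  var previous$ = scope.$;                  // the library remembers the old '$'
  var lib = { name: 'fakeLib' };
  lib.noConflict = function () {
    scope.$ = previous$;                    // restore the old '$'
    return lib;                             // hand the library back
  };
  scope.$ = lib;                            // the library clobbers '$'
})(globalScope);

// Our '$' has been clobbered, so reclaim it:
var fake$ = globalScope.$.noConflict();
// Now globalScope.$ holds our original value again, and fake$ is the library.
```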

This has been a common practice in most JavaScript libraries, but it’s not a fantastic solution. It doesn’t provide very good compartmentalizing of code, it forces you to declare things before you use them, and it requires the imported code (either a library or your own code) to actually implement a .noConflict() function.

If that’s confusing, read up on it. It’s important to understand the problem before you continue onto the solutions below.

Nobody was happy with .noConflict(), so they started looking into other ways to solve the problem. We have 4 solutions worth mentioning in this context:

  • Require.js (Implementation of AMD)
  • Browserify (Implementation of CommonJS)
  • Angular dependency injection
  • ES6 modules

Each one has its pros and cons, and each works quite a bit differently. You can even use 1 or 2 in tandem (Burke used 2). I’ll cover what each does, how they work with Angular, and which one I suggest.

Sample App

Let’s get a little Angular app together so we can talk about it.

Here is a simple app that lists users off Github.
The code is here, but it’s the completed version we will build in this post. Read through for no spoilers!

All the JavaScript could be in this one file:

Initial Angular app with all code in one file
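A minimal sketch of what that single file contains (the Github endpoint and the controller name are assumptions; the GithubSvc service matches the description below):

```javascript
// app.js -- the whole app in one file
var app = angular.module('app', []);

// A service with one function that serves users from Github
app.factory('GithubSvc', ['$http', function ($http) {
  return {
    fetchUsers: function () {
      return $http.get('https://api.github.com/users');
    }
  };
}]);

// A controller that uses the service to load the user array into $scope
app.controller('GithubCtrl', ['$scope', 'GithubSvc',
  function ($scope, GithubSvc) {
    GithubSvc.fetchUsers().then(function (res) {
      $scope.users = res.data;
    });
  }]);
```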

First we declare an ‘app’ object that is our module. We then define a service ‘GithubSvc’ with one function that can serve us users from Github.

After that, we define a controller that uses the service to load that array into $scope. (This is the HTML page that renders it)

Splitting into separate files

The trouble is that this code is all in one file. Totally unreasonable for a real app. Maybe I’m a curmudgeon, but when I first started looking at Angular and the code samples all showed how to do this, all I wanted to see was a real world solution with proper separation.

I would like to have this code in a structure like this:
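One possible layout, split by role in the codebase (file and folder names are illustrative):

```
public/js/
    app.js
    controllers/
        github.ctrl.js
    services/
        github.svc.js
    directives/
    filters/
```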


Note: If this app got large, it might make sense to have a separate ‘github’ module as well.

The alternate way to do this would be to split things out by functionality rather than part of the codebase:
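The same files grouped by feature instead (again, names are illustrative):

```
public/js/
    app.js
    github/
        github.ctrl.js
        github.svc.js
```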


I don’t have a strong preference either way. Probably very large apps would benefit from the former, and smaller ones the latter.

Regardless, without using a module loader like browserify or require.js, we would have to add a script tag for every one of these files. That’s a no go. That could easily grow to hundreds of files.

There are performance reasons why you don’t want to have tons of script tags too. The browser does pipeline them, but it can only do so many at a time. They have overhead, and the latency would be killer to our friends outside of California.

So here is the goal:

We need a way to have many Angular files in dev, but they need to be loaded into the browser in bulk (not a script tag for each one).

This is why people look to module loaders like require.js or browserify. Angular allows you to logically separate out code, but not files. I’m going to show an easier way, but first let’s examine the available module loaders.

Require.js — Too complicated

Require.js was the first major push towards coming up with a consistent way to have modules inside of JavaScript. Require.js allows you to define dependencies inside a JavaScript file that you depend on. It runs inside the browser and is capable of loading modules as needed.

It accomplishes 2 general tasks, loading of modules and handling the load order.

Unfortunately it’s really complicated to set up, requires your code to be written in a specific way, certainly has the steepest learning curve, and can’t deal with circular dependencies well — and that can happen when trying to use a module system on top of Angular.

Burke Holland covered the issues with using require.js with Angular very well, so I encourage you to read that for a clearer reason why you should not use Angular with require.js.

Working with RequireJS and AngularJS was a vacation on Shutter Island. On the surface everything looks very normal. Under that surface is Ben Kingsley and a series of horrific flashbacks. — Burke Holland

The ability for require.js to load modules on demand is also something that won’t work with Angular (at least, in a reasonable situation). That seems to be something people want, but I’ve certainly never worked on a project that needed it.

I want to emphasize that last point, as people get this wrong: module systems are not there so that you only load the code you need. Yes, require.js does do that, but it’s not why require.js is useful. Modules are useful to logically separate code so developers can reason about it more easily.

In any case, it’s a bad solution and I won’t show you how to do it. I bring it up because people often ask me how to integrate require.js with Angular.

Browserify — A much better module loader

Where require.js has the browser load the modules, browserify runs on the server before the code ever reaches the browser. You can’t take a browserify file and run it in a browser; you have to ‘bundle’ it first.

It uses a format similar to (and almost 100% compatible with) Node.js module loading. It looks like this:

Browserify example

It’s a really pretty, easy to read format. You simply declare a variable and ‘require()’ your module into it. Writing code that exports a module is very easy too.

In Node, it’s great. The reason it can’t work in the browser, however, is that it’s synchronous. The browser would have to wait when hitting one of those require calls, then make an HTTP call to load the code in. Synchronous HTTP in a browser is an absolute no-no.

It works in Node since the files are on the local filesystem, so each of those require() calls is very fast.

So you can take code like this, run it through browserify, and it will combine all the files into a bundle that the browser can use. Once again, Burke’s article covers using browserify with Angular very well.

By the way, if everything I just said about browserify is confusing, don’t worry about it. It’s certainly more confusing than the solution I’m about to propose.

It is a great tool I would jump to use on a non-Angular project. With Angular, however, we can do something simpler.

Angular Dependency Injection — Solves most of our problems

Go back and look at our sample app’s app.js. I want to point out a couple of things:

It doesn’t matter what order we create the service or the controller. Angular handles that for us with its built-in Dependency Injection. It also allows us to do things like mocking out the service in a unit test. It’s great, and my number one favorite feature inside Angular.

Having said that, with this method, we do need to declare the module first to use that ‘app’ object. It’s the only place that order of declarations matter in Angular, but it’s important.

What I want to do is simply concatenate all the files together into one, then require just that JavaScript file in our HTML. Because the app object has to be declared first, we just need to make sure that it’s declared before anything else.

Gulp Concat

To do this, I will be using Gulp. Don’t worry about learning a newfangled tool though, I’m going to use it in a very simple way and you can easily port this over to Grunt, Make, or whatever build tool you want (shockingly, even asset pipeline). You just need something that can concat files.

I’ve played around with all the popular build systems and Gulp is far and away my favorite. When it comes to building CSS and JavaScript specifically, it’s bliss.

You might be thinking I’m just replacing one build tool (browserify) with another (gulp), and you would be correct. Gulp, however, is much more general purpose. You can compose this Gulp config with other tools like minification, CoffeeScript precompilation (if you’re into that sort of thing), sourcemaps, rev hash appending, etc. Yes it’s nothing browserify can’t do, but once you learn how to do it with Gulp you can do the same on any other asset (like css). Ultimately it’s much less to learn.

You can use it to process PNGs, compile your Sass, start a dev Node server, or run any code you can write in Node. It’s easy to learn, and will provide a consistent interface to your other developers. It provides us a platform to extend on later.

I would much rather just type ‘gulp watch’ and have that properly watch all my static assets in dev mode than have to run ‘watchify’, a separate node server, a separate sass watcher, and whatever else you need to keep your static files up to date.

First I’ll install Gulp and gulp-concat (gotta be in the project and global):

$ npm install --global gulp
$ npm install --save-dev gulp gulp-concat

By the way, you’ll need a package.json in your app and Node installed. Here’s a little trick I do to start my Node apps (npm init is too whiny):

$ echo '{}' > package.json

Then toss in this gulpfile.js:
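A sketch of what that gulpfile might look like (the src/ and dist/ paths here are assumptions; gulp 3.x syntax):

```javascript
// gulpfile.js: concatenate all app files into one app.js
var gulp = require('gulp');
var concat = require('gulp-concat');

gulp.task('js', function () {
  // the array of globs puts any file named module.js first, so the
  // angular.module setter runs before all of the getters
  return gulp.src(['src/**/module.js', 'src/**/*.js'])
    .pipe(concat('app.js'))
    .pipe(gulp.dest('./dist/'));
});
```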


This is a simple task that takes the JavaScript files in src/ and concatenates them into app.js. Because the source globs are passed as an array, any file named module.js will be included first. Don’t worry too much about understanding this code; when we get to minification I’ll clear it up.

If you want to play along at home, use these files, then run ‘gulp js’ to build the assets. Donezo.

For more on Gulp, read my article on setting up a full project with it.

Icky Globals

We can do better. You know how you create that ‘app’ variable? That’s a global. One ‘app’ global is probably not a problem, but as we grow to have more and more modules, they may conflict.

Luckily Angular can solve this for us very easily. The function angular.module() is both a getter and a setter. If you call it with 2 arguments:

angular.module as a setter:

var app = angular.module('app', ['ngRoute']);

That’s a setter. You just created a module ‘app’ that has ‘ngRoute’ as a dependency. (I won’t be using ngRoute here, but I wanted to show what it looks like with a dependent module)

Calling that setter will also return the module as an object (that’s what we put into var app). Unfortunately you can only call it once; calling the setter again for the same module throws nasty error messages that can be frustrating to newbies. Stick to calling the setter exactly once and all will be good.

If we call angular.module() with a single argument:

angular.module getter:

angular.module('app');

It’s a getter and also returns the module as an object, but we can call it as many times as we want. For this reason, we can rewrite our components from this:

Global module service

Into this:

No globals involved
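Sketching the before and after (the service name is a placeholder; the two-argument setter still runs exactly once, in its own file):

```javascript
// before: a service hung off the global 'app' variable
var app = angular.module('app', []);
app.factory('someService', function () {
  return {};
});

// after: no globals; the one-argument getter looks the module up
angular.module('app').factory('someService', function () {
  return {};
});
```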

The difference is subtle and might seem innocuous to new JavaScript developers, but the advanced ones are nodding along now. Maintaining a large JavaScript codebase means preventing the use of globals.

To you pedants: I realize that there is still a global ‘angular’ object, but there’s almost certainly no point in avoiding that.

Here we have a pretty well-functioning way to build the assets, but there are a few more steps needed to get to a fine-tuned build environment. Namely, it’s a pain to have to run ‘gulp js’ every time we want to rebuild ‘app.js’.

Gulp Watch

This is really easy, and I think the code speaks for itself (Lines 10-12):

Gulp with watching
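A sketch of that gulpfile (gulp 3.x syntax; the watch task at the bottom corresponds to the ‘Lines 10-12’ mentioned above, and the paths are assumptions):

```javascript
// gulpfile.js: rebuild app.js whenever a source file changes
var gulp = require('gulp');
var concat = require('gulp-concat');

gulp.task('js', function () {
  return gulp.src(['src/**/module.js', 'src/**/*.js'])
    .pipe(concat('app.js'))
    .pipe(gulp.dest('./dist/'));
});

// fire off the 'js' task on any change under src/
gulp.task('watch', ['js'], function () {
  gulp.watch('src/**/*.js', ['js']);
});
```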

This just defines a ‘gulp watch’ task we can call that will fire off the ‘js’ task every time a file matching ‘src/**/*.js’ changes. Blammo.


Alright, let’s talk minification. In Gulp we create streams from files (gulp.src), pipe them through various tools (minification, concatenation, etc.), and finally output them to a gulp.dest pipe. If you know Unix pipes, this is the same philosophy.

In other words, we just need to add minification as a pipe. First, install gulp-uglify to minify:

$ npm install -D gulp-uglify
Gulp minification
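A sketch of the naive version, just adding uglify as another pipe (same assumed paths as before):

```javascript
// gulpfile.js: concat then minify (this version breaks Angular DI!)
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('js', function () {
  return gulp.src(['src/**/module.js', 'src/**/*.js'])
    .pipe(concat('app.js'))
    .pipe(uglify())  // mangles the function argument names
    .pipe(gulp.dest('./dist/'));
});
```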

But we have a problem! It has munged the function argument names Angular needs to do dependency injection! Now our app doesn’t work. If you’re not familiar with this problem, read up.

We can either use the ugly array syntax in our code, or we can introduce gulp-ng-annotate.

NPM install:

$ npm install -D gulp-ng-annotate

And here’s the new gulpfile:

Gulp minification with ng-annotate
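A sketch of that gulpfile: ngAnnotate rewrites the DI calls into the safe array syntax before uglify mangles the names (paths assumed as before):

```javascript
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var ngAnnotate = require('gulp-ng-annotate');

gulp.task('js', function () {
  return gulp.src(['src/**/module.js', 'src/**/*.js'])
    .pipe(concat('app.js'))
    .pipe(ngAnnotate())  // make the DI annotations minification-safe
    .pipe(uglify())
    .pipe(gulp.dest('./dist/'));
});
```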

I hope you’re starting to see the value in Gulp here: using the conventional format of Gulp plugins, I can quickly solve each of these build problems as I run into them.


Everyone loves their debugger. The issue with what we’ve built so far is that it’s now one minified hunk of JavaScript. If you want to console.log in Chrome, or run a debugger, it won’t be able to show you relevant info.

Here’s a Gulp task that will do just that! (Install gulp-sourcemaps)
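A sketch, assuming the same pipeline as before: sourcemaps.init() starts recording before the transforms, and sourcemaps.write() emits the map file next to the bundle:

```javascript
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var ngAnnotate = require('gulp-ng-annotate');
var sourcemaps = require('gulp-sourcemaps');

gulp.task('js', function () {
  return gulp.src(['src/**/module.js', 'src/**/*.js'])
    .pipe(sourcemaps.init())
    .pipe(concat('app.js'))
    .pipe(ngAnnotate())
    .pipe(uglify())
    .pipe(sourcemaps.write('.'))  // writes app.js.map beside app.js
    .pipe(gulp.dest('./dist/'));
});
```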


Why Concat is Better

Concat works better here because it’s simpler. Angular is handling all of the code loading for us, we just need to assist it with the files. So long as we get that module setter before the getters, we have nothing to worry about.

It’s also great because new files are just added into the directory. No manifest like we would need in browserify. No dependency declarations like we would need in require.js.

It’s also just generally one less moving part, one less thing to learn.

What we built

Here is the final code. It’s an awesome starting point to build out your Angular app.

  • It’s got structure.
  • It’s got a dev server.
  • It’s got minification.
  • It’s got source maps.
  • It’s got style. (The Vincent Chase kind, not the CSS kind)
  • It doesn’t have globals.
  • It doesn’t have shitloads of <script> tags.
  • It doesn’t have a complex build setup.

I tried to make this not about Gulp, but as you can tell: I freaking love the thing. As I mentioned earlier, you could achieve a similar setup with anything that can concat.

If there is interest, I could easily extend this to add testing/css/templates/etc. I already have the code. EDIT: https://github.com/dickeyxxx/angular-boilerplate

Third-party code

For third-party code: if it’s something available on a CDN (Google CDN, cdnjs, jsdelivr, etc), use that. If the user has already loaded it from another site, the browser will reuse it. They also have very long cache times.

If it’s something not available on a CDN, I would still probably use a new script tag but load it off the same server as the app code. Bower is good for keeping these sorts of things in check.

If you have a lot of third-party code, you should look into minifying and concatenating it like above, but I would keep it separate from your app code so you don’t end up with just one huge file.

ES6 Modules — The real solution

The next version of JavaScript will solve this problem with built-in modules. They worked hard to ensure that it works well for both fans of CommonJS (browserify) and AMD (require.js). This version is a ways out, and you probably won’t be able to depend on the functionality without a shim of some kind for at least a year, probably a few. When it does come out, however, this post will be a relic explaining things you won’t need to worry about (or at least it’ll be horrifically incorrect).

Angular 2.0

It’s worth mentioning that Angular 2.0 will use ES6 modules, and at that point we’ll be in bliss. It’s nowhere close to release though, so for now, if you want to use Angular, you need a different option. Angular 2.0 will be a dream. It’s going to look a lot more like a series of useful packages than a framework, allowing you to pick and choose functionality, or bake them into an existing framework (like Ember or Backbone).

Angular 2.0 will use a separate library di.js that will handle all of this. It’s way simpler, and it’s only a light layer on top of ES6 modules. We should be able to easily use it in all apps, not just Angular apps. The unfortunate thing for you is that you will need to deal with the crufty state of affairs with JavaScript modules until then.

Man. I love all these great ways JavaScript is improving, but god damn is it a lot to keep learning.

If you’d like to learn more about Angular, check out my book on creating apps with the MEAN stack.

P.S. I have some code samples you can use to asynchronously load an Angular app. Any interest in reading about that? EDIT:https://github.com/dickeyxxx/ng-async

Posted in Linux


Requiring vs Browserifying Angular

One of the aspects of Angular that seems to appeal to a multitude of people is its opinions on how you structure an application. Usually we consider opinions to be bad, since developers don’t want your ideas on what constitutes “correct” application architecture thrust upon them.

In the case of JavaScript, it seems that there was a mass of folks waiting for someone – anyone – to have a strong opinion on which enterprises could standardize and applications could be built, scaled and maintained by large and ever changing teams. In the end, we needed more than a foundation, we needed building plans.


Angular’s Blueprint For Applications

The blueprint Angular offers is fundamentally quite simple – JavaScript doesn’t have a module system, so Angular provides one for you. Angular ensures that all of your JavaScript code is ready, loaded and available when your application runs. It does this primarily via dependency injection.

Consider a hypothetical, super simple application. There is one partial view. It has a corresponding controller. This controller in turn has a service injected into it for data access. Whenever the application runs, Angular makes sure that all of these “string” representations of actual modules are injected as objects.

// using Angular Kendo UI for UI components and data layer abstraction
(function () {

  var app = angular.module('app', ['ngRoute', 'kendo.directives']);

  // the routeProvider is injected here (requires ngRoute)
  app.config(['$routeProvider', function ($routeProvider) {
    $routeProvider
      .when('/home', {
        templateUrl: 'partials/home.html',
        controller: 'HomeController'
      })
      .otherwise({ redirectTo: '/home' });
  }]);

  app.controller('HomeController', ['$scope', 'productsDataSource', function ($scope, productsDataSource) {

    $scope.title = 'Home';
    $scope.productsDataSource = productsDataSource;

    $scope.listViewTemplate = '<p>{{ ShipCity }}</p>';
  }]);

  app.factory('productsDataSource', function () {
    return new kendo.data.DataSource({
      type: 'odata',
      transport: {
        read: 'http://demos.telerik.com/kendo-ui/service/Northwind.svc/Orders'
      },
      pageSize: 20,
      serverPaging: true
    });
  });

}());


There is a lot going on here:

  • Declare the application module;
  • Create a factory which returns a Kendo UI DataSource;
  • Create controllers for partials, injecting the DataSource into HomeController;
  • Define routes and match partials with controllers.

The brilliant thing about Angular is that it mostly doesn’t matter in what order you do these things.

As long as the app module exists first, you can create any of the subsequent factories, controllers, routes or any of the rest in any order. Angular is then smart enough to look at your dependencies and load them for you, even if you specified the dependency after the dependent module. If you have been writing JavaScript for any amount of time, you know what a huge problem this solves.

Application Structure vs Physical Project Structure

At this point it at least appears as though we can create an application with some actual sanity in JavaScript. However, this app is already pretty verbose, and it does virtually nothing. Can you imagine what our file would look like in a real world app? Yikes!

The next logical step would be to break these controllers, services, and anything else we can out into separate files. This would be the physical project structure that mimics the coded one. We generally have two options here: Browserify and RequireJS.


That “app” object is really the key to everything that Angular is going to be doing. In normal usage, Angular assumes that the document will be ready by the time the application is “bootstrapped”. According to the documentation, Angular does “automatic initialization” on the DOMContentLoaded event.

It also says, “or when the angular.js script is evaluated if at that time document.readyState is set to complete”. Is it just me, or does that last sentence make zero sense? In any event, the steps Angular typically goes through whenever the DOM is ready are:

  • loads the module specified by the ng-app attribute;
  • creates the application injector, the thing that injects objects into other objects based on their string value;
  • compiles the HTML using whatever element contains the ng-app attribute as the root of the application and reads down the DOM tree from there.

This is how Angular is normally used. As long as all our scripts are loaded before DOMContentLoaded (think of this as document.ready), everything will be good. This makes Browserify a great solution for breaking Angular apps out into different physical files.

Using the above example, we could break down the files into the following structure…

  • app
    • partials
      • home.html
    • controllers
      • homeController.js
    • services
      • productsDataSource.js
    • app.js

Browserify allows the use of CommonJS modules in the browser. That means that each “module” needs to export itself so that it can be required by the others.

The homeController.js file would be:

// controllers/homeController.js

module.exports = function ($scope, productsDataSource) {
  $scope.title = 'Home';
  $scope.productsDataSource = productsDataSource;

  $scope.listViewTemplate = '<p>#: ShipCity #</p>';
};


The productsDataSource.js factory is similarly simple:

// services/productsDataSource.js

module.exports = function () {
  // the productsDataSource service is injected into the controller
  return new kendo.data.DataSource({
    type: 'odata',
    transport: {
      read: 'http://demos.telerik.com/kendo-ui/service/Northwind.svc/Orders'
    },
    pageSize: 20,
    serverPaging: true
  });
};

The app.js file is where all the magic happens:

// app.js

// require all of the core libraries

// pull in the modules we are going to need (controllers, services, whatever)
var homeController = require('./controllers/homeController');
var productsDataSource = require('./services/productsDataSource');

// module up
var app = angular.module('app', [ 'ngRoute', 'kendo.directives' ]);

// routes and such
app.config(['$routeProvider', function ($routeProvider) {
  $routeProvider
    .when('/home', {
      templateUrl: 'partials/home.html',
      controller: 'HomeController'
    })
    .otherwise({ redirectTo: '/home' });
}]);

// create factories
app.factory('productsDataSource', productsDataSource);

// create controllers
app.controller('HomeController', ['$scope', 'productsDataSource', homeController]);

And then, with all the command line skill in the world…

$> watchify js/app/**/*.js -o build/main.js

Watchify is a little utility which watches directories and “browserifies” all your code. I’ve taken some liberties here in assuming that you already have at least an awareness of browserify and what it is/does.

Some of this I like, and some of it makes me want to change my major.

I love how you can just require vendor libraries in the app.js file. Beyond that, Browserify respects the order in which you require them. Amazing.

I loathe the fact that I’m still manually creating controllers, factories and what not in the app.js file. It seems like I should be able to do this in the modules and pull them in. As it is, all my “Angular” code is really in the app.js file and every other file is just JavaScript. Well, it’s all just JavaScript so maybe I should shut up about it.

All in all, I like how Angular works with Browserify. I’m going to go out on a limb and say that Angular works pretty seamlessly with Browserify and I enjoyed working with it.

Next, let’s talk about something that I very much did not enjoy: RequireJS and Angular.



I love RequireJS. I have written about it a bit, and use it in virtually all of my projects, both web and hybrid. I prefer it to Browserify. I believe, in my most humble of developer opinions, that RequireJS is the best way to module.


Working with RequireJS and AngularJS was a vacation on Shutter Island. On the surface everything looks very normal. Under that surface is Ben Kingsley and a series of horrific flashbacks.

The issue at the core of this whole debacle is that Angular is doing things on DOM ready and doesn’t want to play your async games. Since RequireJS is all about async (AMD = Asynchronous Module Definition), reality begins to crumble around you as you try to put the pieces together.

Requiring Angular

Due to the async loading, the whole ng-app attribute is out. You cannot use it to specify your Angular app. This really tripped me up because it was the only way I knew how to Angular.

The second thing that is an issue is that darn app module. You can’t pass it around very easily without creating some crazy circular dependencies. This is an area of RequireJS that you want no part of.

There are plenty of blog posts on how to use Angular with RequireJS, but half of them I found to be incomplete and the other half looked like way more work than I wanted to do. What I ended up going with was something put together by Dmitry Eseev. I found his solution to be the most scalable and required the least amount of setup.

Based on his article, I came up with the following structure for the application…

  • app
    • partials
      • home.html
    • controllers
      • index.js
      • module.js
      • homeController.js
    • services
      • index.js
      • modules.js
      • productsDataSource.js
    • app.js
    • main.js
    • routes.js

Let’s start with the main.js file, which requires in all vendor libraries (Angular, Kendo UI, jQuery) and shims the main app module. All of this is simply to make sure that the right files are loaded and executed in the right order.

require.config({
  paths: {
    'jquery': 'vendor/jquery/jquery',
    'angular': 'vendor/angular/angular',
    'kendo': 'vendor/kendo/kendo',
    'angular-kendo': 'vendor/angular-kendo',
    'app': 'app'
  },
  shim: {
    // make sure that kendo loads before angular-kendo
    'angular-kendo': ['kendo'],
    // make sure that the vendor libraries load before the app module
    'app': {
      deps: ['jquery', 'angular', 'kendo', 'angular-kendo']
    }
  }
});

define(['routes'], function () {

  // create an angular application using the bootstrap method
  angular.bootstrap(document, ['app']);

});


Notice that the application is manually bootstrapped here. What this file is basically saying is, “load all of these files, then run angular on the document with ng-app set to ‘app’”. Since this file is loaded asynchronously by RequireJS, we have to use this “manual bootstrap” method to start the Angular application.

By the time that angular.bootstrap method is reached, all of the files have already been loaded. How does that happen? All via dependencies resolved by RequireJS. Notice above that the define function is asking for the routes.js file. RequireJS then loads this file before executing the angular.bootstrap method.

// routes.js

define([
  'app'
], function (app) {

  // app is the angular application object
  return app.config(['$routeProvider', function ($routeProvider) {
    $routeProvider
      .when('/home', {
        templateUrl: '/app/partials/home.html',
        controller: 'homeController'
      })
      .otherwise({ redirectTo: '/home' });
  }]);

});

The routes.js file has declared that app.js is a dependency. The app.js file creates the angular application object and exposes it so that the routes can be defined off of it.

// app.js

define([
  'controllers/index',
  'services/index'
], function () {

  // the actual angular application module, passing
  // in all other modules needed for the application
  return angular.module('app', [
    'ngRoute',
    'kendo.directives',
    'app.controllers',
    'app.services'
  ]);

});

The app.js file creates the module and injects all of the required dependencies. This includes the ngRoute service, the Angular Kendo UI Directives, and two other modules that we have yet to see but were defined as dependencies at the top of the file. Those are the controllers/index.js file and the services/index.js file. Let’s break down the controllers/index.js file.

// controllers/index.js

define([
  'controllers/homeController'
], function () {
  // this file exists only to pull in all of the controller files
});

That code does nothing besides load dependencies. There is only one currently, but a larger application could and will have many, many controllers. All of those controllers would be loaded in this file. Each controller is then contained in a separate file.

// controllers/homeController.js

define([
  'controllers/module'
], function (module) {

  module.controller('homeController', ['$scope', 'productsDataSource',
    function ($scope, productsDataSource) {
      $scope.title = 'Home';
      $scope.productsDataSource = productsDataSource;

      $scope.listViewTemplate = '<p>#: ShipCity #</p>';
    }
  ]);

});


That’s the same old HomeController code, but it requires a module.js file. Another file?! Yep, the last one for controllers. Its sole job is to create the app.controllers module so that it’s available when we try to create a controller off of it in any controller file.

// controllers/module.js

define([], function () {

  return angular.module('app.controllers', []);

});


Let’s recap what just happened since that was pretty intense.

  • “main.js” requires “routes.js”
    • “routes.js” requires “app.js”
      • “app.js” requires “controllers/index.js”
        • “controllers/index.js” requires all controllers
          • all controllers require “module.js”
            • “module.js” creates the “app.controllers” module

That’s kind of a hairy dependency tree, but it scales really well. If you add a new controller, you just add the “controllers/nameController.js” file and add that same dependency to the “controllers/index.js” file.

The services work the same exact way. The app.js module requires the services/index.js file which requires all services. All services each require the services/module.js file which simply creates and provides the app.services module.

Back in the app.js file, all of these items are loaded in and passed to the Angular application module that we created. The very last thing that happens is that angular.bootstrap statement in the main.js file. Basically, we started at the top and worked our way to the bottom.

It’s far from ideal though.

RequireJS is forced to load all of the application code before the application ever runs. That means no lazy loading of code. Of course, you could make the argument that you should be using r.js to build all of your code into one file anyway, but you are still forcing the browser to load and parse every single bit of your code. I would consider that a micro-optimization though. If you find yourself with a bottleneck caused by JavaScript parsing, you may have just written Gmail, and you’ve got much bigger problems than how to structure your modules.

Browserify Or Require Or ?

I’ve already professed my preference for Require in most situations, but I actually believe that Browserify is better for AngularJS applications; if nothing else because you get to remove the async component, which really drops several levels of complexity.

Browserify and RequireJS are not the only module loaders on the planet. There are several others that are up and coming and worth looking into. I’ve recently heard great things about WebPack, which apparently not only works with AMD and CommonJS, but also any assets that might be going from the server to the client. It also handles pre-processors like LESS, CoffeeScript, Jade and others.

What module loader do you use with AngularJS? Have an opinion about Browserify vs Require? What about the Angular Seed Project? There are lots of options out there and I would love to know what everyone else is doing to get a structure that is as sexy and robust as Angular itself is.

Blueprint photo by Will Scullin

Cat photo titled “Angry Tiger” by Guyon Moreé


PHP and nginx installation scripts


# Install dependencies
yum install -y gcc gcc-c++ autoconf libjpeg libjpeg-devel libpng libpng-devel freetype freetype-devel libpng libpng-devel libxml2 libxml2-devel zlib zlib-devel glibc glibc-devel glib2 glib2-devel bzip2 bzip2-devel ncurses curl openssl-devel gdbm-devel db4-devel libXpm-devel libX11-devel gd-devel gmp-devel readline-devel libxslt-devel expat-devel xmlrpc-c xmlrpc-c-devel libmcrypt-devel

# Configure
./configure --prefix=/usr/local/php --with-mysql --with-mysql-sock --with-mysqli --enable-fpm --enable-soap --with-libxml-dir --with-openssl --with-mcrypt --with-mhash --with-pcre-regex --with-sqlite3 --with-zlib --enable-bcmath --with-iconv --with-bz2 --enable-calendar --with-curl --with-cdb --enable-exif --enable-fileinfo --enable-filter --with-pcre-dir --enable-ftp --with-gd --with-openssl-dir --with-jpeg-dir --with-png-dir --with-zlib-dir --with-freetype-dir --enable-gd-native-ttf --enable-gd-jis-conv --with-gettext --with-gmp --with-mhash --enable-json --enable-mbstring --disable-mbregex --disable-mbregex-backtrack --with-libmbfl --with-onig --enable-pdo --with-pdo-mysql --with-zlib-dir --with-pdo-sqlite --with-readline --enable-session --enable-shmop --enable-simplexml --enable-sockets --enable-sysvmsg --enable-sysvsem --enable-sysvshm --enable-wddx --with-libxml-dir --with-xsl --enable-zip --enable-mysqlnd-compression-support --with-pear

# Compile and install
make && make install


yum install pcre-devel openssl-devel zlib-devel -y

# Configure
./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_stub_status_module --without-http_rewrite_module --with-http_ssl_module --with-pcre

# Compile and install
make && make install


yum install c-ares-devel libuuid-devel -y
groupadd mosquitto
useradd -s /sbin/nologin mosquitto -g mosquitto -d /var/lib/mosquitto


rabbitmq-server -detached



Rabbitmq command doesn’t exist?

I have RabbitMQ installed via Homebrew, and when I go to /usr/local/sbin and run rabbitmq-server it says rabbitmq-server: command not found. Even as sudo it gives the same error.

How do I get RabbitMQ to start if it’s not a command? I have also tried chmod +x rabbitmq-server in that directory to make it executable; same issue.


From the docs:

The RabbitMQ server scripts are installed into /usr/local/sbin. This is not automatically added to your path, so you may wish to add PATH=$PATH:/usr/local/sbin to your .bash_profile or .profile. The server can then be started with rabbitmq-server.

All scripts run under your own user account. Sudo is not required.

You should be able to run /usr/local/sbin/rabbitmq-server or add it to your path to run it anywhere.

Your command failed because, by default, . is not on your $PATH. You went to the right directory (/usr/local/sbin) and wanted to run the rabbitmq-server that existed there and had exec permissions, but by typing rabbitmq-server as a bare command, Unix only searches for it in your $PATH directories, which didn’t include /usr/local/sbin.

What you wanted to do can be achieved by typing ./rabbitmq-server, that is, execute the rabbitmq-server program that is in the current directory. That’s analogous to running /usr/local/sbin/rabbitmq-server from anywhere: . represents your current directory, so it’s the same as /usr/local/sbin in that context.


exception ‘ReflectionException’ with message ‘Class UserTableSeeder does not exist’

The default Laravel 5 project has a classmap defined in its composer.json:

    // ...
    "autoload": {
        "classmap": [
            "database"
        ]
    },
    // ...

Run composer dump-autoload every time you add or remove a class in your database directory to update the Composer autoloader.

Reference: https://github.com/laravel/laravel/blob/develop/composer.json
